Mapping AI — Theory of Change

The Endless Frontier, Revisited

Working Draft · March 2026

The problem

AI represents a step change that demands a renegotiation of the relationship between government, industry, and the public. In short, it demands a new social contract. But the conversation today is stuck in old paradigms: the US vs. China, accelerationism vs. safety, regulation vs. laissez-faire. None of these addresses the fundamental question: on what terms is the value of American technological innovation captured?

The historical record offers a clear model: when a revolutionary technology arrives, societies that build new institutions to meet the moment are able to distribute that technology's benefits broadly. Consider American electrification. By 1935, roughly half a century after the first central power stations were built, 90% of urban homes had power, but only 10% of rural ones did. Rather than subsidize private utilities or wait for markets to adapt, FDR and the New Deal coalition created the Rural Electrification Administration, the Tennessee Valley Authority, and community-owned financing structures. By the early 1950s, the rural/urban electricity gap had largely closed.

Our existing institutions were not designed to handle the AI transition. The question is: can we build the new institutions we desperately need?

Crucially, AI is different from electricity. Its adoption is faster than that of any technology in history. Its risks are existential. And it is owned by a handful of firms that control not just a market or an infrastructure, but the frontier of knowledge production itself. These companies built their products on decades of publicly funded research, open-source infrastructure, human-generated data, and collective ingenuity, yet have no obligation to return value to the public, and face no real consequences if something goes terribly wrong. There is no compact governing this relationship. Our shared inheritance depends on forging one.

The existing landscape

Each camp has a diagnosis. None has a compact. This is the vacuum we are filling.

The right
Win the race
Deregulate, accelerate, and dominate. The US-China frame is a prominent motivator. Distribution is an afterthought; workforce training is the lone concession to labor. Preempting state regulation is the camp's active political agenda.
Democratic establishment
Safety and transition
Privacy, civil rights, job protections, and responsible deployment. Proposes taxing AI gains to fund workforce retraining, and generally frames labor issues as questions of how AI is deployed. These proposals have not yet cohered into a structural compact.
Safety / alignment — short timelines camp
Race to safety through speed
AGI is imminent; the priority is ensuring the right actors win. Concentrated control in "responsible" Western labs is preferable to fragmentation. A subset argues that alignment is unsolved and that building superintelligence before it is solved risks catastrophic, potentially irreversible outcomes.
Safety / governance — institutional camp
Govern the technology, not the race
Focused on near-term harms, accountability, auditing, and institutional design, but still largely defensive and decoupled from distribution and labor. Internal factions argue that AI will diffuse slowly, giving institutions time to respond. Power concentration is a key point of focus.

Our operating principle

Commonwealth: AI for the public, starting from how innovation is financed, how infrastructure is constructed, and how knowledge is maintained. The emphasis is on distributing gains to the public by design, rather than through downstream redistribution alone. By bringing together labor, safety, national security, and institutional design, we aim to proactively ensure that the American public captures the value of American innovation. The overarching mandate is to secure a public return.

The mechanisms
01 — Institutional construction
Government must create, not just regulate or subsidize. The New Deal's REA both regulated private utilities to serve rural America and built new institutional forms that private utilities never would. A new compact for AI requires the same constructive ambition, to three new ends:
Measurement: continuous assessment of how capabilities develop and how broadly benefits flow.
Access: development of public compute and AI infrastructure alongside private offerings.
Steering: regulation with an affirmative mandate to build, not just bind.
02 — Return architecture
Public investment should generate public return by structure, not by taxation after concentration has already occurred. The problem with redistribution as a governance strategy is that it operates on outcomes while leaving intact the architecture of who captures value in the first place. A new compact intervenes at the source, meaning:
Ownership: cooperative structures in which users own the infrastructure they depend on.
Commons: shared resources are treated as public property from the start.
Financing: capital instruments that don't require monopolistic scale to be worthwhile.
These are early directions in a design space still being built out. The principle is fixed even where the instruments are not: public return by design.
03 — Adaptive governance
AI is developing faster than legislation can keep pace. Sunset clauses and mandatory review periods start to address this, but they offer scheduled flexibility rather than genuine adaptability. Adaptive governance operates at three layers:
Sensing: a continuous, real-time picture of what the technology is doing and who it is affecting.
Response: pre-committed mechanisms triggered by observable conditions rather than political calendars.
Recalibration: a process for updating the response thresholds and mechanisms themselves as understanding evolves.
The closest analogs to adaptive governance come from deliberative democratic institutions, like Taiwan's Alignment Assemblies. A broader international survey of adaptive governance experiments is part of this project's ongoing research agenda.
04 — Coalitions
Safety researchers, governance scholars, labor advocates, national security strategists, civil rights organizations, and industry leaders are each developing answers to related questions, but in different languages and with different threat models. Few actors are building across divisions. We work with anyone who agrees that AI's capabilities should serve human flourishing and dignity. The governing agenda must be:
Specific: implementable policies rather than vague goals.
Staffed: personnel with technical expertise, including from industry.
Deployable: impact on day one.
Operationally, we aim for Project 2025-level preparedness, oriented towards building capacity rather than dismantling it.
05 — Pluralism
The current trajectory of AI development tends toward convergence: a small number of general-purpose models, built on overlapping data and shared architectures, concentrated in the same cities and the same private companies. Monocultures lack resilience: when one architecture has a fundamental blind spot, every model built on it tends to share that vulnerability. A plurality of approaches, ownership structures, purposes, and scales is a property of healthy technological ecosystems. We seek pluralism in the following issue areas:
Financing: creating markets for smaller, purpose-built models to serve communities that would otherwise be left behind.
Supply chains: investment in open-source infrastructure that serves the public good, and hardware speciation that avoids single points of failure.
Alignment: diversity in the teams conducting AI research and in the values they instill.

Causal pathway
Now — Apr 2026
Stakeholder map, working draft, core team assembly. Begin outreach to first-ring contacts. This is where we are.
May – Aug 2026
Expert review workshops validate the framework and convert reviewers into endorsers. Domain experts co-author or formally consult on each section. Industry participants see a positive-sum case for engagement, not a compliance burden.
Late 2026 — Midterms
Launch endorsed framework with co-signatories spanning academia, former officials, industry, labor, and civil society. Set the intellectual terms of the debate before the incumbent frames harden into the default.
Early 2027
Congressional champion introduces legislation in the 120th Congress, making the framework a live vehicle that 2028 candidates must engage with.
2028 Primary
Presidential candidates adopt elements of the framework. AI governance becomes a defining issue on kitchen-table terms — who benefits from American innovation — not elite policy discourse.
2029 — Day one
A new administration has a specific, staffed technology-society compact ready to implement. New institutions are not merely drafted; they are ready to build.