Law needs infrastructure, not just rules: three proposals to govern powerful AI
This paper argues that lawmaking for artificial intelligence should do more than set rules: legal systems must also build the infrastructure that makes rules possible to apply and enforce. The author uses a Perspective in the journal PNAS to highlight concrete kinds of legal and regulatory infrastructure that could help manage the risks of very capable AI systems.
The paper reviews three specific proposals. First, it suggests registration regimes for “frontier models” — meaning the most advanced and powerful AI systems — so regulators would know where the most capable systems exist. Second, it proposes registration and identification regimes for autonomous agents — AI systems that can act on their own — so those agents can be tracked and linked to responsible parties. Third, it outlines the idea of regulatory markets, where private companies could compete to offer services that help regulate, audit, or monitor AI systems.
At a high level, these ideas aim to change how rules are carried out. Registration systems create records that regulators can use to monitor development and deployment. Identification of autonomous agents helps clarify who is accountable when an agent acts in the world. Regulatory markets are meant to harness private firms' capacity to innovate, so that those firms build the tools and services needed to enforce standards and provide oversight.
This matters because the paper treats AI as potentially transformative, meaning it could reshape many parts of society, and argues that without legal infrastructure, written rules may be hard to apply or may fail to keep pace with new technology. Infrastructure can make enforcement more feasible, help assign responsibility, and create channels through which new oversight tools can be used in practice.
The author notes important caveats. The piece is a Perspective, which means it presents proposals and arguments rather than reporting new experiments or empirical tests. The ideas will need detailed design work and political choices before they could be put into practice. Questions remain about how to define key terms (for example, what counts as a "frontier" model or an "autonomous" agent), who would enforce registration, how privacy and commercial secrecy would be handled, and how to govern the private companies operating in regulatory markets.