When the State Buys AI, Who Decides the Limits?

Most of the public conversation about AI still focuses on the technology itself. Is it safe? Is it biased? Which company is building the most powerful model?

Those questions matter. But when the government uses AI to help make decisions that shape people’s lives, a different question comes into view, one that strikes me as more a matter of governance than of technology: who gets to decide what is permissible, what is prohibited, and what kind of oversight is required once these systems enter public life?

That question is not easy to answer, and that should concern us. It suggests that one of the most consequential boundaries in public life is being drawn in places most people never see, through processes most people are never invited to enter.

Government agencies increasingly procure AI tools from private vendors. On one level, that is unremarkable. Governments purchase complicated systems all the time. But procurement has its own logic. It is organized around budgets, specifications, timelines, compliance requirements, and vendor selection. It is built to solve a purchasing problem.

What it is not designed to do is answer a prior and more democratic question: whether a given use of AI should exist at all, and if so, under what public constraints. And yet, in practice, it is often helping to decide precisely that.

Many of the boundaries around government AI use are not first being drawn through legislation, public deliberation, or transparent constitutional reasoning. They are being shaped through contracting, vendor negotiations, internal policy choices, and administrative default. The public, whose lives may be sorted, surveilled, scored, flagged, or constrained by these systems, rarely appears as a genuine participant in that process. At most, the public is treated as a beneficiary, a stakeholder, or an eventual recipient of whatever has already been decided.

That is not simply a problem of exclusion. It is a problem of substitution. Procurement is quietly doing work that looks a great deal like governance. It is helping determine what kinds of algorithmic power the state may exercise, under what conditions, with what degree of transparency, and with what forms of recourse. But procurement was never built to carry that burden. It is an administrative mechanism, not a democratic one.

To be clear, this is not an empty field. Efforts have been made to build norms and guardrails around government AI use. International bodies have advanced frameworks grounded in human rights, transparency, democratic values, and human oversight.

Still, a pattern becomes visible once you look closely enough. The closer AI moves toward the government’s most coercive powers, including surveillance, policing, immigration enforcement, intelligence, and military use, the more often meaningful safeguards begin to thin. National security carveouts appear. Transparency obligations narrow. Public oversight becomes harder to access. The protections are strongest where the state’s hand is lightest and weakest where it is heaviest. The places where democratic accountability is most urgently needed are often the very places where it is most easily displaced.

When people notice this gap, the conversation tends to split into two familiar answers.

One places hope in technology companies. Under this view, vendors should refuse certain government uses, set internal ethical boundaries, and withhold their tools where the risks are too severe. There is some truth in that. A vendor’s refusal can matter. A company declining to build or sell a tool for a particular purpose may function as a real constraint in the absence of stronger public rules.

But that is not democratic governance. A private company deciding what the government may or may not do is still a private actor drawing the line around public power without public authorization. Terms of service are not law. Vendor restraint may be useful. It cannot be the foundation.

The second answer places greater trust in government itself. On this view, agencies, defense officials, and public administrators understand the operational stakes, and some degree of internal discretion is inevitable. That is also true. Public institutions do have expertise, and not every aspect of governance can be resolved through direct public participation.

But internal policy is not the same thing as public accountability. When agencies define the rules for their own AI use through classification, emergency authority, or closed administrative processes, the public is left with something dangerously close to trust without visibility. Trust has a place in democratic life, but trust without visibility is not accountability.

Neither of these answers is sufficient on its own. One privatizes line-drawing around public power. The other internalizes it within the very institutions exercising that power. In both cases, the people who live under the system are left at a distance.

This is one reason my work in cooperative governance shapes how I see the problem.

A cooperative is owned and governed by its members. Its principles include democratic control, autonomy, transparency, education, and accountability to the community. I do not think government should simply be reimagined as a cooperative. The analogy is not that simple. But cooperative governance does offer a disciplined way of asking what kind of relationship exists between people and the institutions that govern them.

Are people positioned as real participants in political life, with voice, information, structural protections, and some capacity to shape the rules that govern them? Or are they treated primarily as passive recipients of decisions made elsewhere?

That distinction matters here. Much of the current architecture around government AI still assumes the second model. The public is informed late, if at all. Comment is invited after the architecture is already in place. Oversight bodies often report back into the same institutional structure they are meant to constrain. Harm is framed as a set of isolated technical errors rather than a broader question about how public power is being designed and exercised.

Cooperative principles sharpen our sense of what is missing. They point toward public participation in setting boundaries rather than merely reacting after deployment. They suggest that no single vendor should become so embedded in public infrastructure that meaningful exit becomes impossible. They insist on intelligibility, not as a courtesy, but as a condition of legitimate governance. They also direct attention to community-level accountability, where the question is not just whether one individual outcome was wrong, but how a system distributes risk, visibility, and recourse across a population.

None of this is utopian. These are design choices. They are concrete, legible, and available within existing traditions of governance. The problem is not that we lack the conceptual tools. It is that we too often decline to treat them as necessary.

If democratic legitimacy is going to mean anything in this context, some commitments need to move from aspiration to structure.

Some uses of AI should be prohibited by law rather than left to agency discretion. When a technology can enable social sorting, hidden surveillance, predictive policing, or other forms of high-impact state action, the boundaries should not be drawn through informal policy or procurement default alone. Legislatures exist to make public judgments about public power. They should be doing that work here.

Procurement should also be understood as an instrument of democratic design, not merely fiscal administration. When a government becomes deeply dependent on a single AI vendor, that is not just a contracting problem. It affects autonomy, bargaining power, reversibility, and the public’s ability to revisit the infrastructure through which government acts. Portability, interoperability, audit rights, and vendor diversity may sound technical, but they function as protections against the concentration of private control.

High-stakes uses should require robust public impact assessments and meaningful public reporting. If the public cannot know the use case, the conditions of deployment, the known failure modes, or the chain of oversight, then the system is not genuinely governed in any democratic sense. It is simply being administered.

People affected by AI-driven decisions should have the right to know that such a system is being used, the ability to challenge outcomes, and access to meaningful human review. If a system can influence access to benefits, trigger investigation, alter immigration status, shape risk classification, or otherwise affect the conditions of a person’s freedom or security, secrecy about its existence should not be the default.

National security exceptions also need real boundaries. Not every use of government AI can or should be fully public. Some degree of secrecy is plainly necessary in certain contexts. But the phrase “national security” cannot become a universal solvent that dissolves every safeguard around authorization, review, and accountability.

The same question extends beyond domestic borders. When democratic states deploy AI in intelligence, border enforcement, or military contexts abroad, the commitments that ground legitimacy do not suddenly become irrelevant. If anything, the distance from public visibility makes the need for structure even more serious.

In one sense, this is not really a question about AI at all. Societies already know how to govern powerful, complex systems. The deeper question is whether we are willing to treat government use of AI as an exercise of public power that must remain answerable to the people, or whether we will allow it to become a privately mediated service purchased and implemented through largely closed processes.

The legitimacy of these systems turns on a few basic things. The people living under them must have meaningful voice in how they are authorized. They must be able to see enough to understand how power is operating. They must have real avenues to challenge harm. And they must be protected against the concentration of control in either public agencies or private vendors.

That is not a radical proposition. It is simply the logic of democratic governance carried forward into a new technological context.

The architecture is not settled yet. That is part of what makes this moment so important. The materials already exist. The principles are available. What remains uncertain is whether we will build the structure deliberately, while choices are still open, or wait until the defaults have hardened and call that governance after the fact.
