Statecraft

May 2026 · essay

The State at the Controls

AI in an apparatus that can no longer interrogate its own assumptions

by Jacob Huibers

A decision in a meeting room

In the spring, during an interim assignment at a municipality, a decision was placed on my desk for signature. It concerned the procurement of an AI tool. The document was carefully prepared. It contained a page on the supplier, a page on costs, two pages on the legal framework including a reference to the European AI Act, a page on implementation planning, and half a page on expected efficiency gains. What it did not contain was an assessment of what the instrument would do to the work itself. Not the work as a process description, but the work as the place where people render judgement on other people under conditions of scarcity, uncertainty, and discretionary space.

I asked the management team whether the staff who would actually use the tool had been consulted on the decision. The answer was that this had not been operationally desirable, because the procurement process was already well advanced and the supplier had insisted on confidentiality. I asked whether the instrument had been tested against historical cases that we knew had been wrongly handled. The answer was that this kind of test had not been specified in the offer, and that the supplier had presented it as optional at additional cost. I asked whether the staff would be able to override the instrument after implementation when their professional judgement diverged. The answer was that this was possible in principle, but that the instrument was meant to “remove subjectivity from the process”, and that any deviation would automatically trigger an audit trail.

The decision did not receive my signature. What I saw in that meeting, and what I encountered three more times in different forms in the months that followed, is what this paper addresses. AI is entering Dutch public administration through an apparatus that can no longer interrogate its own assumptions. What in a healthy organisation might have become a diagnostic aid becomes, under current conditions, an instrument that reinforces existing dissociation and locks it in algorithmic form.

Bostrom’s question

In NRC of 25 April 2026, Bas Heijne published an essay reflecting on his interview ten years earlier with the Swedish philosopher Nick Bostrom, author of Superintelligence (2014).¹ Bostrom had already articulated the concern that now dominates public debate: when machines become more intelligent than humans, how do we prevent them from turning against us? His answer was that we should impart our human values to artificial intelligence from the very beginning. Heijne recalls that he began to feel uneasy during the interview. Which values, he asked himself, and the values of which people?

What Heijne goes on to describe, and describes well, is how Bostrom’s question was adopted by a Silicon Valley culture in which the answers were filled in not through broad deliberation but by a specific group of entrepreneurs. Elon Musk, Peter Thiel, Alexander Karp, Sam Altman and their associates have effectively inverted Bostrom’s idea, which had been to make the machine human before it was too late. They have come to see humans increasingly as machines, susceptible to reprogramming, with society as a collection of computers that can be restarted in the right hands. Heijne quotes the authors of Muskism (2026): Musk sees society and the state as a collection of computers, and if you want to change society you must take the right computers in hand and reprogram them.²

Heijne closes his essay with an observation that serves as a compass for this paper. It is not the machine, in his view, that threatens our existence. It is the human at the controls. The Statecraft contribution to this observation is that in a specific apparatus, the Dutch public administration, “the human at the controls” is no anonymous abstraction. He or she sits within a specific architecture, with specific selection mechanisms, specific incentives, and specific blind spots. What AI becomes within Dutch government does not depend primarily on the suppliers selling it, nor on the European AI Act regulating it. It depends on the apparatus that adopts it, and that apparatus has properties I have described in earlier Statecraft publications, properties that are at work here again.

Between utopia and dystopia

On the page preceding Heijne’s essay, in the same edition of NRC, a different piece appears on the same subject.³ Koert van Mensvoort, director of the Next Nature Museum in Eindhoven, sketches a hopeful vision for 2030. In his image, the robot has become an ecosystem with which the human is intertwined, while AI keeps the systems running and the human gains room for what the Japanese call ikigai, a reason to get up in the morning. Homo faber gives way to homo ludens, the playing human. A nurse earns more than a surgeon, a good waiter more than a notary, because human presence in an automated world has become the scarce luxury.

Between the two pages, at the editorial level, the much larger public-administration debate over AI plays out in miniature. On one side, a utopian vision in which AI creates space for human meaning. On the other, a dystopian vision in which the human at the controls threatens the existence of others. Both pieces are thoughtful in their own right. Both work with the same material: the rise of AI as technology and as a socio-political phenomenon. What falls away between them, quite literally, because the sheet is printed on both sides and the space between them does not exist as space, is the place where the work is done. The zone where a civil servant in an executive agency sits at a counter with an AI tool beside her, where a department head prepares an AI decision for a management team, where a municipal secretary assesses a procurement procedure for its effect on her organisation. That zone is systematically under-represented in the Dutch public debate on AI. Not because it is less important than utopia or dystopia, but because its register, the register of working practice within a specific organisation, translates poorly into an opinion piece and lends itself poorly to illustration by either a museum director or an essayist.

What actually happens in this in-between zone is empirically in view at one Dutch institution with the mandate and the discipline to look: the Netherlands Court of Audit. What the Court of Audit has reported in its AI investigations of 2024 and 2025 should occupy the page between the two NRC pieces. Set against Mensvoort’s hopeful perspective and Heijne’s troubling analysis stands the page that is missing, the page of what is currently happening in execution. For the Statecraft debate, that page is the most urgent. It is neither a forecast nor a warning. It is the present state of an apparatus that does not adequately deliberate on its own present state.

What the Court of Audit sees

In October 2024, the Netherlands Court of Audit published its focus study Focus on AI in Central Government.⁴ It inventoried 433 AI systems across seventy government organisations. The largest group, 167 systems, consisted of experiments still in progress at the time of the survey. Eighty-eight per cent of organisations used no more than three AI systems. Only five per cent of the systems were registered in the public Algorithm Register. The most common applications were knowledge processing and inspection and enforcement. The organisations making the heaviest use of AI were the police and the Employee Insurance Agency (UWV).

What the Court of Audit identified as the core problem was not the scale of AI use but the absence of insight into what the AI use was actually doing. For thirty-five per cent of systems in operation, it was not known whether the system met expectations. In a substantial share of cases, no prior determination had been made of the system’s purpose, or of when it would be considered a success. The Court of Audit further noted an incentive within organisations to estimate the risks of AI systems on the low side, since a high-risk classification under the European AI Act triggers strict requirements. Thirty systems were classified as high-risk, but the actual number, in the Court’s judgement, was higher.

Half a year later, in May 2025, the Court of Audit returned to the subject on more specific terrain in its 2024 Audit Report.⁵ Three risk models were examined: one at the Tax and Customs Administration, one at the Benefits Agency, and one at UWV. The UWV algorithm was found to be largely in order. The algorithms of the Tax and Customs Administration and the Benefits Agency did not comply with the GDPR. With regard to the Benefits Agency system, designed to identify parents who would have to repay substantial childcare benefits relative to their income and who might therefore need assistance, the Court warned that information from the support process could be used for enforcement or fraud detection, and that one group might have a different chance of being offered help than another. In the Tax and Customs Administration’s algorithm for detecting carousel fraud, personal data turned out to be inadequately protected.

Aleid Wolfsen, chair of the Dutch Data Protection Authority, wrote in de Volkskrant in July 2024 that at almost every government body where the Authority conducts an investigation, discriminatory algorithms are uncovered for which a sound justification of the risk indicators is lacking.⁶ The AI researcher Joris Krijger asked, in the same period, whether the Dutch government has sufficient ethical infrastructure to make trade-offs between efficiency and fairness in algorithmic decision-making. Ewout Irrgang, member of the Court of Audit’s board, responded that responsibility for algorithmic decisions always lies with people, and that the Court began developing a testing framework in 2022, which has since been expanded and sharpened.

Together, these reports show that AI has been adopted at substantial scale across the Dutch public sector before the organisations adopting it have insight into what it does. That is in itself not a uniquely Dutch story, and it is not unique to AI as a technology. It is, however, a specific instance of a pattern described in earlier Statecraft publications, and it is useful to make those connections explicit.

How the dissociated organisation absorbs AI

In the Dissociated Organisations series I described four symptoms of executive agencies in which evident errors can no longer land.⁷ Under each of those four symptoms, AI works not as a corrective but as an amplifier.

The reputation architecture, the first symptom, drives organisations to produce announceable results before a substantive assessment has been made. An AI tool that promises efficiency gains delivers precisely the type of announceable result this logic rewards. The announcement of implementation can take place before the instrument has been tested against real cases, and its administrative benefits can be cashed in before its material effects appear. That the Court of Audit cannot give a verdict on performance for thirty-five per cent of operational systems is no accident. It is what a reputation architecture produces as a by-product: announced implementations without the instruments to evaluate their workings.

Reproduction inwards, the second symptom, drives organisations to select their senior cadre on mobility and visibility rather than on substantive weight. AI decision-making is a topic on which top officials can speak fluently without knowing the operations in depth. Anyone moving through Senior Civil Service rotations of three to four years learns AI as an administrative theme in the same general terms in which all themes present themselves. Substantive knowledge of what a specific algorithm does to a specific operation requires years of experience in that one dossier, and that experience is rarely still present in the senior layer. The result is that AI decisions are made in a register that sells its own abstractions as substance.

The absorbed debt without integration, the third symptom, provides the most explicit evidence that AI in the Dutch public sector is no innocent newcomer.⁸ The childcare benefits scandal was at its core an AI scandal. The system that classified tens of thousands of parents as fraudsters between 2014 and 2019 was an algorithm. The Fraud Signalling Facility, the CAF teams, and the blacklists that grew out of these constructions worked with risk indicators in which ethnicity functioned as a proxy. What was set up in 2020 as a recovery operation did not address the algorithmic dimension of the original problem as a learning subject. It was caught up in separate trajectories on bias, registration obligations and ethical frameworks. The Tax and Customs Administration and the Benefits Agency, where the scandal arose, in 2024 still operated algorithms that, according to the Court of Audit, did not comply with the GDPR. The single-loop response patched the apparatus, not the assumptions under which the apparatus deployed algorithms in 2014 as an instrument for classifying citizens.

Performative maturity, the fourth symptom, is by now in full bloom in the Dutch AI response.⁹ The Algorithm Framework, the Algorithm Register, the Court of Audit’s testing framework, the European AI Act, the Human Rights and Algorithms Impact Assessment, the Data Protection Impact Assessments, the supervisory functions placed with the Data Protection Authority and the Authority for Digital Infrastructure: none of these instruments is unwise on its own. Their combined operation produces a layer of compliance architecture beneath which the actual decision-making about AI in executive agencies can stabilise without the substantive column ever touching it. An organisation that adopted an AI system in 2024 without insight into its performance can pass through every required assessment in 2026 and keep the system in use, because the assessments primarily test the process of adoption and not the effect of the system on the work.

Alongside the four symptoms, the sociological dimension I described in The Apparatus’s Diploma Democracy operates with particular sharpness in the AI dossier.¹⁰ Bostrom’s question of which values and from which people is, in the Dutch public-sector apparatus, in practice answered by those who already sit at the table. The digitalisation directorates, the chief information officers, the consultancies guiding procurement, the academic experts in advisory bodies, the political advisors organising support: these belong overwhelmingly to one sociological layer. This is not a political objection. It is an institutional fact. When the table at which AI decisions are taken is occupied solely by people who are themselves optimistic about AI because they do not encounter the scale of its effects at the counter and in the neighbourhood, the optimistic interpretation receives structurally more room than the pessimistic one. Not from ill will, but from a sociological gravity the apparatus itself produces.

Who is at the controls

Heijne argues at the end of his essay that it is the human at the controls who threatens our existence, not the machine. In the Dutch executive apparatus, “the human at the controls” is a sequence of specific positions. The head of an information management directorate at a ministry. The chief information officer of an executive agency. The director of digitalisation at a municipality. The procurement advisor guiding the tender. The lawyer interpreting the AI Act. The external consultant preparing the implementation roadmap. The project manager taking the system into production. By the time an AI system has an effect on a citizen, the chain of people who have had their hands on the controls is already long and specialised.

What is missing in this chain is a representative of those who do the work. The parking enforcement officer, the social welfare caseworker, the youth protection worker, the childcare inspector, the administrative-legal officer at the Tax Administration. These officials see what the system does once it is in production, and they have the knowledge to say in advance what the system will not see. In current AI decision-making they sit at considerable remove from the table where decisions are taken. Their signal arrives in a register the table does not automatically recognise. By the time the signal arrives and is translated, it carries the imprint of the translator more than that of the operator.

Heijne draws on the work of Quinn Slobodian and Ben Tarnoff on Peter Thiel and Alexander Karp of Palantir. The authors of Muskism describe how Palantir no longer offers its services as a stepping stone to a better future but as a means of defending oneself against threats in a dangerous world.¹¹ The state must be made dependent on Palantir technology, in the logic of its owners, because it cannot carry the complexity itself. In the Dutch context, this logic operates in a less visible form. No single supplier forces indispensability. The combined operation of AI suppliers, consultants, IT firms, and compliance functions produces a dependency in which the executive agency can no longer formulate its own judgement of what it does without their mediation. This too is a form of dissociation. Not the outside rewriting the inside, as in the reputation architecture, but an external technological infrastructure rewriting the judgement of those who do the work.

What would work

Anyone looking for a response to the adoption of AI in the Dutch public sector finds two dominant lines of reasoning. The first is technologically optimistic and argues for accelerated implementation to capture productivity gains before international competition pulls definitively ahead. The second is regulatory and argues for stricter application of the AI Act and broader supervision. Both feed the cycle I have described, because neither addresses the apparatus itself through which implementation and supervision are carried out.

Three design choices would make the difference. None of them is in itself revolutionary, but each runs against current practice. The first is that the operational level sits at the table before an AI decision is taken, not after. A representative from the floor, chosen on knowledge and experience rather than on representational title, should be part of the decision-making chain from the first exploration of a supplier. What she would bring is what an offer rarely contains: the specific operational cases against which the instrument should be tested, the specific deviations the work itself currently absorbs, the specific contexts in which, in her judgement, the instrument should not be deployed. That a procurement procedure makes such input difficult is a procedural matter. It is no reason to omit the input, because in the medium term an implementation with this input costs substantially less than one without it.

The second is that the question “which assumptions does this instrument confirm” is asked explicitly before the instrument is implemented. An AI tool for risk profiling confirms the assumption that risk lends itself to advance quantification and optimisation through patterns in historical data. Whether that assumption truly applies to the specific decisions in which the instrument will be deployed is not a technological question. It is a question about the nature of the work. Comparable questions arise with other AI applications. A tool for automated text generation confirms the assumption that the value of a text lies primarily in its content and not in the thinking that produced it. A tool for automated decision-making confirms the assumption that the discretionary space in which a decision is taken is a variable to be removed from the process rather than a value carried within it. None of these assumptions need be wrong. But without being made explicit, implementation becomes a form of philosophical obedience to whatever the supplier thinks the work is.

The third is that external supervision receives direct source access to the output of AI systems, not only to their implementation. I formulated this principle in general terms in the introductory paper of Dissociated Organisations, and it takes a specific form for AI.¹² The Court of Audit can test whether an AI system meets its testing framework. What it cannot do without direct source access is verify whether the system in production actually does what the implementation reports claim. That difference is large in a dissociated organisation, and in a heavily automated execution process it grows rather than shrinks. The same logic applies to local audit offices vis-à-vis municipal AI implementation, and to the parliamentary inquiry committees that will inevitably, at some point, have to rule on algorithmic harm in a recovery operation.

None of these three choices requires a new law or a new code. What the European AI Act and its Dutch implementation already make available is sufficient for anyone willing to interpret them in this direction. What the three choices require is a different weight in daily decision-making, and that weight rests with the municipal secretary, the operations director, the secretary-general, and the interim manager who declines to sign in the management meeting.

The Aiki method, described in the forthcoming book De Richting van de Beweging, is again applicable here.¹³ The human inclination to see AI as a technological solution to substantive problems is strong, and it is fed both by the market pressure of suppliers and by the administrative pressure of efficiency. Aiki neither denies this inclination nor forces it; it redirects it through a design in which the operational level can render its judgement before the supplier offers its solution. That is uncomfortable for whoever takes the decision, since it lengthens decision-making and complicates announceability. In the medium term, it produces organisations capable of carrying their AI implementations, rather than AI implementations carrying their organisations.

The open question

Heijne closes his essay in a sombre tone. The Economist, he writes, now calls for strict regulation after years of placing the free market above governmental interference. It is rather late, but better late than never. Mensvoort closes his in a playful tone: the great game has at last begun. For the Dutch executive apparatus, the heart of the matter lies elsewhere than at either of these poles. The question is not primarily whether the regulation of AI in its European form is adequate, nor primarily whether the utopian perspective of homo ludens is within reach. The question is whether a dissociated apparatus can adopt a new instrument wisely, or whether it will reproduce its own dissociation through that instrument.

What I have learned in my assignments is that the answer differs by organisation and that the difference is not predetermined. A municipal secretary who brings the operational level to the table when selecting an AI tool, a director who makes the assumptions of the instrument explicit, an audit office that bases its testing on output rather than on process: together they produce an implementation in which the instrument does its work without depriving the organisation of its own. A municipal secretary who keeps the operational level out, a director who lets the assumptions slide, an audit office that bases its testing on process: together they produce an implementation in which the instrument makes itself indispensable and the organisation binds itself to it. The difference between the two is small at the moment of decision, and large in the medium term.

Heijne’s closing line fits this conclusion. It is the human at the controls, nothing more. In Dutch public administration that human is not anonymous. Anyone who has read this paper knows which positions I mean. This paper is written for the people in those positions. For anyone wishing to ensure that the adoption of AI does not extend the cycle described in earlier Statecraft papers, the work does not lie in a new governance document, and it does not lie in choosing between Mensvoort’s utopia and Heijne’s warning. It lies in the page that was missing between the two in the newspaper of 25 April 2026, and that plays out daily in a hundred places across the Dutch executive apparatus. It lies in the next time an AI decision arrives on the desk, and in the question of who is at the table before the decision is taken.


Footnotes

¹ Bas Heijne, “We zijn verwikkeld in een oorlog van allen tegen allen” [We are caught in a war of all against all], NRC, opinion piece, 25 April 2026. For Bostrom’s original work see Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, Oxford University Press, 2014. The Future of Humanity Institute at the University of Oxford, of which Bostrom was director, was dissolved in April 2024.

² Quinn Slobodian and Ben Tarnoff, Muskisme: een gids voor de verbijsterden [Muskism: a guide for the bewildered], 2026, cited in Heijne (2026). For an extended discussion see the interview with the authors in NRC of the same period.

³ Koert van Mensvoort, “In 2030 maakt AI ruimte voor de spelende mens” [In 2030 AI makes room for the playing human], NRC, opinion piece in the series Wat als het góéd gaat met AI in 2030? [What if it goes well with AI in 2030?], 25 April 2026. The piece appears in the same edition of NRC as Heijne’s essay (see note 1), on the page preceding it. Mensvoort is director of the Next Nature Museum in Eindhoven.

⁴ Netherlands Court of Audit (Algemene Rekenkamer), Focus op AI bij de rijksoverheid [Focus on AI in Central Government], 16 October 2024. Online: https://www.rekenkamer.nl/publicaties/rapporten/2024/10/16/focus-op-ai-bij-de-rijksoverheid. For the State Secretary of the Interior’s response see the accompanying letter of the same date.

⁵ Netherlands Court of Audit, Resultaten verantwoordingsonderzoek 2024 ministerie van Financiën [Audit Findings 2024, Ministry of Finance], May 2025, in particular the paragraphs on the algorithms of the Tax and Customs Administration and the Benefits Agency. The 2024 audit of the Ministry of Social Affairs and Employment covers the UWV algorithm.

⁶ Aleid Wolfsen, “Discriminerende algoritmes worden bij vrijwel elke overheid aangetroffen” [Discriminating algorithms are encountered at virtually every government body], de Volkskrant, opinion piece, July 2024. For Ewout Irrgang’s response on behalf of the Court of Audit see the Court’s blog, Als het aankomt op algoritmen, zijn de problemen van het Rijk breder dan alleen discriminatie [When it comes to algorithms, the problems of central government are broader than discrimination alone], 1 August 2024. For the work of Joris Krijger see his doctoral dissertation and publications at the AI & Ethics research group at Rotterdam University of Applied Sciences.

⁷ For the four symptom papers and the closing synthesis, see the Statecraft series Gedissocieerde Organisaties [Dissociated Organisations] (April-May 2026): De reputatie-architectuur [The reputation architecture], De reproductie naar binnen [Reproduction inwards], De opgenomen schuld zonder integratie [The absorbed debt without integration], De performatieve volwassenheid [Performative maturity], and Synthese: het herstel van inhoudelijk gewicht [Synthesis: the restoration of substantive weight].

⁸ For the algorithmic dimension of the childcare benefits scandal and the role of the Fraud Signalling Facility (Fraude Signalering Voorziening, FSV), see among others the report of the Parliamentary Inquiry Committee Childcare Benefits, Ongekend onrecht [Unprecedented Injustice], December 2020, and subsequent publications by the Dutch Data Protection Authority on ethnicity as a risk indicator (2020-2022).

⁹ For the development of performative maturity as institutionalised inversion see Jacob Huibers, De performatieve volwassenheid: Waarom meer code, meer toezicht en meer compliance de dissociatie verergeren in plaats van helen [Performative maturity: why more code, more supervision and more compliance worsen dissociation rather than heal it], symptom paper IV in the Dissociated Organisations series, Statecraft, May 2026.

¹⁰ Jacob Huibers, De diplomademocratie van het apparaat: Hoe een sociologische scheidslijn binnen de uitvoering de gedissocieerde organisatie verzwaart [The apparatus’s diploma democracy: how a sociological cleavage within execution compounds the dissociated organisation], Statecraft, May 2026. The paper builds on Mark Bovens and Anchrit Wille, Diplomademocratie: Opleiding als nieuwe scheidslijn [Diploma Democracy: education as the new cleavage], expanded and revised edition, Prometheus, 2026 (first edition Bert Bakker, 2011).

¹¹ Heijne (2026), citing Slobodian and Tarnoff (2026). The cited formulation is that Palantir’s services are offered not as a stepping stone to a better future but as a means of defending oneself against threats in a dangerous world. Joseph Conrad, Heart of Darkness, 1899, is cited by Heijne via Mr Kurtz’s marginal note “Exterminate all the brutes”, as a critique of the hubris of the Enlightenment.

¹² Jacob Huibers, Gedissocieerde organisaties: Waarom evidente fouten niet meer landen, en wat dat van publiek herstel vraagt [Dissociated organisations: why evident errors no longer land, and what that demands of public recovery], Statecraft, April 2026, section “Verbinding als interventie” [Connection as intervention], subsection on direct source access for supervision.

¹³ For the Aiki method as a design principle rather than an intervention technique, see De Richting van de Beweging: Interim-Management in de Publieke Sector [The Direction of Movement: Interim Management in the Public Sector], manuscript in preparation, chapter 10.


Colophon

“The State at the Controls” is a Statecraft publication occasioned by the NRC edition of 25 April 2026, in which adjacent pages carried a hopeful future scenario by Koert van Mensvoort and a troubling analysis by Bas Heijne, both on AI. The paper builds on the closed series Dissociated Organisations (April-May 2026) and on the paper The Apparatus’s Diploma Democracy (May 2026). It is not part of an ongoing series; it stands on its own, related to those papers in diagnosis. It arises from the observation that the adoption of AI in the Dutch public sector is currently in its formative phase, and that the choices made now will prove irreversible in the years ahead.

Statecraft is Jacob Huibers’s platform for strategic reflection on public-sector execution. Its content connects to the forthcoming book The Direction of Movement: Interim Management in the Public Sector (manuscript in preparation).

Responses and counter-arguments via Statecraft.nl.


Jacob Huibers is an interim manager with more than twenty years of experience in the Dutch public sector. He has worked as cluster manager, cluster director, and quartermaster for municipalities of fifty thousand to over two hundred thousand inhabitants, and for regional collaborative bodies.