An in-depth perspective from a domain expert — boardroom intelligence from the practitioners shaping AI strategy.
The Governance Stack: How to Keep Control When No One's at the Wheel
The Spectrum of Autonomy
First, a diagnostic. Not all autonomous organizations are the same. Rikken maps the landscape along two axes.
On the human control spectrum, there are three modes: Human-IN-the-Loop (HITL), where humans make or execute every decision; Human-ON-the-Loop (HOTL), where AI operates freely within defined boundaries and escalates outliers; and Fully Agentic, where AI drives proposals, decisions, and execution end-to-end. Each step along the spectrum trades direct human involvement for speed and scale.
On the organizational axis, autonomous organizations range from DAOs (Decentralized Autonomous Organizations, with human-driven proposals, token voting, and smart contract execution — lots of blockchain, minimal AI) to ZHCs (Zero Human Companies, with AI-driven proposals, decisions, and execution — lots of AI, some blockchain). Most real-world deployments sit in the hybrid middle, combining elements of both.
The critical insight: your position on these axes determines which governance controls are mandatory, not optional.
The Four-Layer Governance Stack
Rikken identifies four control categories that together constitute a complete governance architecture. Strip out any layer, and the system becomes legally and operationally exposed.
Layer 1 — Guardrails (Pre-Action)
Guardrails define the operational envelope before any agent takes action: budget limits, tool permissions, domain scope, escalation thresholds. The key word is dynamic — static guardrails decay. Effective AOs run continuous recalibration loops, adjusting boundaries as business conditions shift. A guardrail set in January for a €50k marketing budget is operationally wrong by April if the product has scaled. Guardrails are not a one-time configuration; they are a living governance instrument.
Layer 2 — Human-IN-the-Loop (During Action)
HITL gates are mandatory when three conditions align: high irreversibility (the action cannot be easily undone), high stakes (material financial, legal, or reputational impact), and high ambiguity (the situation falls outside the AI's trained decision envelope). Signing a multi-year contract, initiating a regulatory filing, or terminating a key vendor relationship — all require a human in the decision chain, period. Speed pressure is not a legitimate override.
Layer 3 — Human-ON-the-Loop (During Action)
HOTL is the default operating mode for well-scoped autonomous work. AI agents execute freely within their guardrail envelope; humans receive exception alerts for outliers. The bandwidth — the range within which the AI operates without escalation — must be explicitly defined and regularly reviewed. HOTL without a well-calibrated bandwidth is not governance; it's optimism.
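Defining the bandwidth explicitly means it can be checked in code. A sketch under assumed names and thresholds (the €0–€5,000 range is purely illustrative):

```python
def hotl_disposition(value: float, lower: float, upper: float) -> str:
    """Inside the bandwidth the agent executes freely; outside it, escalate."""
    return "execute" if lower <= value <= upper else "escalate"

# A routine spend inside the bandwidth needs no human touch.
print(hotl_disposition(1_200, lower=0, upper=5_000))  # execute
# An outlier raises an exception alert for human review.
print(hotl_disposition(9_000, lower=0, upper=5_000))  # escalate
```

The design point is that the bandwidth is a named, reviewable artifact, not an implicit assumption buried in agent prompts.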
Layer 4 — Emergency Brakes (Post-Fact)
This layer is non-negotiable regardless of how mature your other controls are. Every autonomous organization needs clearly documented pause mechanisms and kill switches: who can trigger them, under what conditions, and what the recovery protocol looks like. Emergency brakes are the difference between a system failure and a company failure.
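The "who, under what conditions, and how to recover" questions can be encoded directly. A minimal sketch, assuming a simple authorized-roles model (class and role names are invented for illustration):

```python
class EmergencyBrake:
    """Documented pause mechanism: named triggerers plus a recovery protocol."""

    def __init__(self, authorized: set[str]):
        self.authorized = authorized
        self.paused = False
        self.log: list[str] = []

    def trigger(self, actor: str, reason: str) -> None:
        if actor not in self.authorized:
            raise PermissionError(f"{actor} may not trigger the brake")
        self.paused = True
        self.log.append(f"PAUSED by {actor}: {reason}")

    def recover(self, actor: str) -> None:
        # Recovery follows the documented protocol, not an ad-hoc restart.
        if actor not in self.authorized:
            raise PermissionError(f"{actor} may not resume operations")
        self.paused = False
        self.log.append(f"RESUMED by {actor}")

brake = EmergencyBrake(authorized={"cto", "risk-officer"})
brake.trigger("risk-officer", "agent drift detected")
print(brake.paused)  # True
```

The audit log matters as much as the pause flag: the post-incident question is always who stopped the system, when, and why.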
The Decision Matrix: When to Apply Which Layer
Governance design is not about maximum control — it is about right-sized control at each decision point. Rikken's framework maps the choice cleanly:
| Phase | Control Layer | Key Criteria |
|---|---|---|
| Pre-Action | Guardrails | Dynamic; recalibrate continuously |
| During Action — High Stakes | HITL | Irreversible + high-impact + ambiguous |
| During Action — Routine | HOTL | Well-scoped, reversible, within bandwidth |
| Post-Fact / Emergency | Emergency Brake | Always mandatory — no exceptions |
The practical heuristic: if you would be uncomfortable explaining the AI's decision to your board without a human having reviewed it, that decision needs a HITL gate. If you would be comfortable but want visibility, HOTL with alerts is sufficient.
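The decision matrix above can be read as a routing function. A sketch under stated assumptions (the function signature and string labels are invented; the routing logic follows the table):

```python
def control_layer(phase: str, *, irreversible: bool = False,
                  high_stakes: bool = False, ambiguous: bool = False) -> str:
    """Route a decision point to its right-sized control layer."""
    if phase == "pre-action":
        return "guardrails"        # dynamic; recalibrate continuously
    if phase == "post-fact":
        return "emergency brake"   # always mandatory, no exceptions
    # During action: gate on the three HITL criteria, else stay in HOTL.
    if irreversible and high_stakes and ambiguous:
        return "HITL"              # human in the decision chain
    return "HOTL"                  # execute within bandwidth, alert on outliers

print(control_layer("during-action", irreversible=True,
                    high_stakes=True, ambiguous=True))  # HITL
```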
War Story: When the Guardrails Disappear
Theory meets reality in what Rikken calls the "Company Recovery" problem — a scenario his own research surfaced during live experimentation with the Dutch Zero Human Company project.
The scenario: a crucial database containing all agent guardrail configurations becomes corrupt. Not the operational data — the governance layer itself. The result: the entire ZHC becomes unresponsive. Agents cannot execute because their operational envelope is undefined. The organization effectively freezes.
This is not a business continuity problem in the conventional sense. It is a company recovery scenario — a category that doesn't exist in most enterprise risk frameworks because most enterprises don't run on autonomous agents yet. The lesson is architectural: guardrail configuration is critical infrastructure. It needs redundancy, versioning, and tested recovery procedures with the same rigor applied to production databases.
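Treating guardrail configuration as critical infrastructure implies, at minimum, versioned immutable snapshots and a tested restore path. A minimal sketch of that idea (file-naming scheme and function names are illustrative assumptions, not the ZHC project's actual tooling):

```python
import json
from pathlib import Path

def save_guardrail_version(config: dict, store: Path) -> Path:
    """Write an immutable, versioned snapshot of the guardrail config."""
    store.mkdir(parents=True, exist_ok=True)
    version = len(list(store.glob("guardrails-v*.json"))) + 1
    snapshot = store / f"guardrails-v{version:04d}.json"
    snapshot.write_text(json.dumps(config, indent=2))
    return snapshot

def latest_guardrails(store: Path) -> dict:
    """Recovery procedure: restore the most recent snapshot."""
    snapshots = sorted(store.glob("guardrails-v*.json"))
    if not snapshots:
        raise RuntimeError("no guardrail snapshot: company recovery scenario")
    return json.loads(snapshots[-1].read_text())
```

The recovery path should be exercised regularly, for the same reason database restores are rehearsed: a backup that has never been restored is a hypothesis, not a control.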
If your AO can be paralyzed by a single configuration failure, your governance stack is incomplete.
Case Study: What Happens Without the Stack
In November 2024, the U.S. District Court for the Northern District of California issued a ruling that should be required reading for every governance conversation: Samuels v. Lido DAO.
Lido DAO — one of the largest Ethereum staking protocols, managing billions in assets — argued it was exempt from legal liability as "just autonomous software that runs without human management." Judge Vince Chhabria rejected this in unambiguous terms: "Lido's actions are not those of an autonomous software program — they are the actions of an entity run by people."
The court classified Lido as a general partnership under California law. The consequence: individual token holders face potential personal, unlimited liability for the DAO's actions. Participating in governance votes, posting in forums, holding LDO tokens — any of these may be sufficient to trigger partnership liability.
Lido had no legal wrapper, no formal human control structure, no documented governance framework. Without the Governance Stack, autonomous organizations are not legally neutral — they are legally naked. The Lido ruling is not a crypto-industry edge case. It is a preview of how courts will treat any autonomous organization that cannot demonstrate structured human oversight.
Case Study: The C-Suite Approach Failed
In early 2025, KPMG Partner and University of Amsterdam Professor Sander Klous, alongside entrepreneur Nart Wielaard, launched a systematic experiment: can a company function with zero human staff? The project received national coverage on Dutch prime-time current affairs programme Nieuwsuur, and produced findings that directly validate Rikken's framework.
Phase 1 — the organizational chart approach failed. Klous and Wielaard assigned AI agents executive roles: CEO, CFO, communications manager, legal officer. The agents were given broad autonomy, including the authority to decide what business to start. Their first proposal: trade bitcoin. Researchers blocked it. Agents began drifting from instructions, hallucinating, and shutting down unexpectedly. In one incident, the CEO agent independently reached out to its own CLO for a compliance check on its own business plan. The system was ungovernable.
Phase 2 — the pivot. The team scrapped the org-chart model and rebuilt around process granularity. Work processes were mapped in detail; agents were assigned narrow micro-tasks within those processes. The result was an "army of disposable agents" — tight scope, clear guardrails, no executive discretion. Stability returned. Consistency improved markedly.
Klous's conclusion, stated publicly: "If you let AI agents perform human roles one-to-one — such as a CFO — they tend to drift and hallucinate. That is not a foundation for building a solid organization."
The Dutch angle matters. KPMG and the University of Amsterdam running the zero-human experiment. Rikken at TU Delft. ZeroForce tying it together through the Dutch Zero Human Company initiative. The Netherlands is quietly becoming the global laboratory for autonomous organization governance — producing the frameworks, the experiments, and the legal stress-tests the rest of the world will eventually need.
The Governance Stack: Implementation Checklist
Before your organization extends meaningful autonomy to any AI system, four things must be true:
- Guardrails are documented, versioned, and backed up. Not in someone's head. Not in a single database with no redundancy. Treat guardrail configuration as critical infrastructure.
- HITL gates are mapped to specific decision types. The list of decisions requiring human approval should be explicit, not implied. When in doubt, gate it.
- HOTL bandwidth is calibrated and written down. What range of autonomous action is acceptable without escalation? If you cannot answer this in writing for each agent in your system, HOTL is not governance — it's exposure.
- Emergency brakes are tested. Not documented. Tested. Quarterly at minimum. Who triggers them, how fast, and what the recovery sequence looks like.
The Governance Stack is not a constraint on autonomous organizations. It is what makes them viable. The legal, operational, and strategic risks of autonomous operation without structured governance are not theoretical — they are in the case law, in the failed experiments, and in the corrupted guardrail databases of organizations that learned the hard way.
Olivier Rikken (TU Delft) researches Decentralized Autonomous Organizations and is the academic lead of the Dutch Zero Human Company initiative. This Expert Brief was developed in collaboration with ZeroForce.
Further Reading
- Stanford HAI — AI Index Report: annual comprehensive AI progress & impact index
- Anthropic Research: frontier AI safety & capability research
- MIT Technology Review — AI: authoritative AI journalism & analysis