AI governance · authority architecture · enforcement layer · agentic AI · enterprise AI

Authority Architecture: The Difference Between Governance That Is Assumed and Governance That Is Designed

Arjen Hendrikse

Most AI governance failures are not dramatic. There is no rogue model, no adversarial attack, no obvious moment where something went wrong. The failure is quieter than that.

A model produces output. The pipeline accepts it. The system executes. The decision crosses from suggestion to action almost by inertia, and by the time anyone reviews what happened, the action is already taken. No explicit authority was ever granted. The system simply proceeded — because nothing in the architecture required it to stop.

This is the governance failure most enterprises are living with right now, and most of them have not named it.

The problem with governing behaviour

Most AI governance programmes are designed around the question: how does this system behave? They use monitoring dashboards, output logging, bias assessments, and model cards to characterise what the system produces. They test performance against benchmarks. They review incidents after the fact.

This is not governance. It is observation.

Observation tells you what happened. Governance determines what is allowed to happen, and enforces that boundary before execution, not after. The distinction matters because a system that is well-observed and inadequately constrained can still cause serious harm — and the dashboards that show you what happened will not protect you when a regulator or a board asks who authorised the action.

The more productive governance question is not how does this system behave, but who is allowed to make this decision?

Four questions that determine whether authority is real

For any AI system making consequential decisions, four questions determine whether the governance is substantive or cosmetic.

Who is allowed to make this decision? Not who monitors it, not who reviews it after the fact — who is explicitly authorised to approve execution. In most systems, the answer is: the model, by default, because nothing in the architecture requires explicit authorisation before the output becomes action.

Under what conditions? Are there defined boundaries on when autonomous execution is permitted? A payment system might be authorised to execute transactions below a certain value without human review, but not above. An AI agent managing vendor communications might be permitted to respond to routine queries but not to commit to commercial terms. If those conditions are not technically enforced, they are not conditions. They are aspirations.

Inside what boundaries? What are the hard limits on what this system can do, regardless of what the model recommends? Boundaries that exist only in documentation are not boundaries. They are notes about what should happen, which is a different thing.

With what evidence attached? Before a consequential decision executes, what information is bound to that execution event? Not logged afterwards — bound to it, so that the authority trail is auditable and the evidence that justified the decision travels with the decision itself.

If your organisation cannot answer all four questions for each AI system making consequential decisions, you do not have governance. You have a system that has assumed authority because nothing required it to ask for it.
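To make this concrete, here is a minimal sketch in Python of what the four questions look like when they are expressed as enforceable structure rather than policy text. Every name here (AuthorityRecord, ProposedAction, authorised) is a hypothetical illustration, not a reference to any particular product or framework.

```python
# A minimal sketch: the four questions as fields on an authority record,
# checked before execution. Any unanswered question means halt, not proceed.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AuthorityRecord:
    decision_owner: str              # who is allowed to make this decision
    max_autonomous_value: float      # under what conditions autonomy is permitted
    permitted_actions: frozenset     # inside what boundaries
    required_evidence: frozenset     # what evidence must be attached

@dataclass
class ProposedAction:
    action: str
    value: float
    evidence: dict = field(default_factory=dict)

def authorised(record: AuthorityRecord, proposal: ProposedAction) -> bool:
    """Return True only if every one of the four questions has a satisfied answer."""
    if proposal.action not in record.permitted_actions:
        return False    # outside the hard boundary
    if proposal.value > record.max_autonomous_value:
        return False    # condition for autonomous execution not met
    if not record.required_evidence.issubset(proposal.evidence):
        return False    # required evidence is not bound to the event
    return True
```

The specific fields matter less than the shape: an unanswered question becomes a refusal to execute, not a gap in a document.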

The financial system as a reference architecture

Financial systems solved this problem a long time ago. A payment transaction does not proceed because the payer wants it to proceed. It proceeds because it has passed a sequence of explicitly designed controls: identity and authority verification (is this party permitted to initiate this transaction?), constraint enforcement (is the transaction within the limits set for this account?), evidence binding (what is the documented basis for this payment?), and deterministic state transition (the transaction either clears or it does not; there is no probabilistic middle ground).

None of these controls are advisory. They are not dashboards that observe what the payment system does. They are gates that determine what the payment system is allowed to do. If the transaction fails a control, it stops. The default is not execution — it is halt.

This is the architecture AI governance needs to replicate. Not for every output, but for every decision that crosses a risk threshold into genuinely consequential territory.
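As an illustration of that gate pattern, here is a short Python sketch. The control functions are hypothetical stand-ins for the payment controls described above; the structure is the point.

```python
# Controls run in sequence. Any failure halts the transaction; execution
# happens only when every gate explicitly passes. The default is halt.
from typing import Callable

Control = Callable[[dict], bool]

def clear_transaction(txn: dict, controls: list[Control]) -> bool:
    """Deterministic state transition: the transaction clears or it does not."""
    for control in controls:
        if not control(txn):
            return False            # a failed control stops the transaction
    return True                     # every gate passed; execution may proceed

# Hypothetical gates mirroring the payment example.
payment_controls: list[Control] = [
    lambda txn: txn.get("payer_verified") is True,            # identity and authority
    lambda txn: txn.get("amount", 0) <= txn.get("limit", 0),  # constraint enforcement
    lambda txn: bool(txn.get("documented_basis")),            # evidence binding
]
```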

What this looks like in practice

At Aivance, the deliverable we call the authority architecture map is built around exactly these four questions. For each AI system in scope, it specifies who holds decision authority, under what conditions autonomous execution is permitted, where the technical boundaries are, and what evidence must be attached to a consequential execution event before it clears.

The Suspended Handoff State — the mechanism that halts an agent at a critical risk threshold and requires explicit human ratification before execution resumes — is the practical implementation of this architecture. It is the equivalent of the payment gateway: the default is halt, not proceed. Execution requires explicit authorisation, not just the absence of an objection.
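As a sketch of how such a mechanism can be structured (an illustration, not Aivance's actual implementation), the essential property is that crossing the risk threshold moves the action into suspension, and the only path out of suspension to execution is a named human's explicit ratification:

```python
# Illustrative state machine for a suspended handoff. Silence is never consent:
# an action above the threshold cannot execute without explicit ratification.
from enum import Enum, auto

class State(Enum):
    PROPOSED = auto()    # below the risk threshold; autonomy permitted
    SUSPENDED = auto()   # at or above the threshold; halted, awaiting a human
    EXECUTED = auto()
    REJECTED = auto()

class Handoff:
    def __init__(self, risk_score: float, threshold: float):
        # Crossing the threshold leads to suspension, never direct execution.
        self.state = State.SUSPENDED if risk_score >= threshold else State.PROPOSED
        self.approver: str | None = None

    def execute_autonomously(self) -> None:
        if self.state is State.PROPOSED:       # only below-threshold actions
            self.state = State.EXECUTED

    def ratify(self, approver: str) -> None:
        """Explicit authorisation by a named human resumes execution."""
        if self.state is State.SUSPENDED:
            self.approver = approver
            self.state = State.EXECUTED

    def reject(self) -> None:
        if self.state is State.SUSPENDED:
            self.state = State.REJECTED
```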

Building this requires understanding how the system is actually architected, not just what the governance documentation says. That is the gap most governance programmes are not equipped to close, because most governance consultants are not working at the architecture level.

The asymmetry that matters

AI models are probabilistic by design. They produce outputs based on patterns in training data, and those outputs are not deterministic. This is a feature, not a flaw — it is what makes the models useful.

But the governance layer that determines what those outputs are allowed to do cannot be probabilistic. Authority either exists or it does not. A boundary either holds or it does not. Human oversight is either technically enforced or it is not.

Models can remain probabilistic. Authority cannot.
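A few lines capture the asymmetry, with illustrative names: the model's output can carry a confidence score, but the gate in front of execution returns only true or false.

```python
# The model's recommendation is probabilistic; the authority check is not.
def authority_gate(action: str, permitted: set[str]) -> bool:
    return action in permitted      # the boundary either holds or it does not

model_output = {"action": "commit_to_terms", "confidence": 0.87}             # probabilistic
allowed = authority_gate(model_output["action"], {"answer_routine_query"})   # deterministic: False
```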

The governance programmes that are going to hold up to regulatory scrutiny, board accountability, and real incidents are the ones built around this asymmetry. The ones that are not will continue to produce dashboards, policies, and oversight committees while their systems proceed by inertia into decisions no one explicitly authorised.


Aivance works with CROs, CISOs, and Enterprise Architects deploying autonomous AI in Singapore and Southeast Asia. The free AI Governance Review is a 30-minute session that maps where your systems have assumed authority versus where authority has been explicitly designed. Book a session here.

Arjen Hendrikse
Founder of Aivance Consulting. ISO/IEC 42001:2023 Lead Auditor. Thirty years working at the edge of what technology can do. More about Arjen
This article was drafted with AI assistance (Claude by Anthropic) and reviewed for accuracy by Arjen Hendrikse before publication. AI Use Policy

Put what you just read to work

If this article raised questions about your own governance posture, the AI Governance Review is the right next step. Thirty minutes, free, no pitch deck.

Book Your Free Governance Review