AI Governance · agentic AI · execution layer · Singapore · enterprise

Governing the AI Execution Layer

Arjen Hendrikse

For most of the generative AI era, enterprise governance focused on the model output layer: was the response accurate, unbiased, and compliant? That conversation remains important. But it is no longer sufficient.

A new governance frontier has opened at the execution layer: the point where AI agents stop generating text and start taking actions. Agents now write to databases, call external APIs, send communications on your behalf, initiate financial transactions, and coordinate with other agents, often without a human reviewing each step.

The governance question has shifted from whether a model output was appropriate to whether a specific action was authorised, traceable, and reversible.

“Is this specific action authorised, under the current identity, within current approval state, data boundaries, and budget constraints?”

This is the execution layer. And most enterprise AI governance frameworks have significant gaps there.
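
To make that question concrete, here is a minimal sketch, in illustrative Python, of what an execution layer authorisation check has to evaluate before a single agent action runs. None of these names come from a real framework; they are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class ActionRequest:
    """One proposed agent action, described before it executes."""
    agent_id: str          # which agent is acting
    acting_identity: str   # the identity the action runs under
    tool: str              # e.g. "crm.update_record", "email.send"
    data_scope: str        # dataset or system the action touches
    estimated_cost: float  # spend this action would incur

@dataclass
class Policy:
    """The organisation's standing limits for this agent."""
    allowed_identities: set[str]
    allowed_data_scopes: set[str]
    requires_approval: set[str]  # tools that need human sign-off
    budget_remaining: float

def is_authorised(req: ActionRequest, policy: Policy, approved: bool) -> bool:
    """Answer the execution layer question for one specific action."""
    if req.acting_identity not in policy.allowed_identities:
        return False  # wrong identity
    if req.data_scope not in policy.allowed_data_scopes:
        return False  # outside the data boundary
    if req.tool in policy.requires_approval and not approved:
        return False  # approval state not satisfied
    if req.estimated_cost > policy.budget_remaining:
        return False  # over budget
    return True
```

The code itself is trivial; the shape of the question is the point. Every check concerns this action, under this identity, right now, rather than whether the model's text was acceptable.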

The numbers tell a clear story

  • 97% of organisations that experienced an AI model breach in 2025 lacked proper AI access controls (IBM Cost of a Data Breach Report 2025)
  • 86–89% of enterprise AI agent pilots have not reached production at scale (Gartner 2026)
  • 14% of Singapore leaders report a mature model for agentic AI governance (Deloitte 2026)

These are not model quality problems. They are execution layer problems: organisations deploying agents without clear answers to who authorised what, what those agents can access, and what happens when something goes wrong.

Why the execution layer is fundamentally different

Three structural factors make execution layer governance categorically harder than output layer governance.

First, actions are often irreversible. A mistakenly sent email, a deleted record, or a financial transfer cannot be undone the way a bad text response can simply be ignored. The stakes of execution errors are an order of magnitude higher.

Second, agents operate across trust boundaries. An agent invoked by an internal user may call a third-party API, access a cloud storage bucket, and hand output to another agent built by a different team. Each boundary crossing is a potential governance failure point.

Third, multi-agent pipelines amplify risk. A compromise or error in one agent can propagate across an entire pipeline before any human observer notices. OWASP’s Top 10 for Agentic Applications, published in December 2025, explicitly identifies cascading failures as one of the ten critical risks for autonomous AI systems.
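
As a rough illustration of how those three factors become controls, the sketch below gates actions on reversibility, trust-boundary crossings, and delegation depth. The reversibility labels, boundary flag, and hop limit are assumptions for illustration, not an established standard.

```python
from enum import Enum

class Reversibility(Enum):
    REVERSIBLE = "reversible"      # e.g. drafting text, reading data
    COMPENSABLE = "compensable"    # undoable with effort, e.g. a record update
    IRREVERSIBLE = "irreversible"  # e.g. sent email, payment, deletion

MAX_AGENT_HOPS = 3  # illustrative cap on agent-to-agent delegation depth

def gate(reversibility: Reversibility,
         crosses_trust_boundary: bool,
         hop_count: int) -> str:
    """Decide how much scrutiny an action gets before it executes."""
    if hop_count > MAX_AGENT_HOPS:
        return "deny"  # bound cascade depth in multi-agent pipelines
    if reversibility is Reversibility.IRREVERSIBLE:
        return "require_human_approval"  # irreversible actions never auto-run
    if crosses_trust_boundary:
        return "log_and_constrain"  # boundary crossings get extra audit and limits
    return "allow"
```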

Three schools of thought from the major vendors

Anthropic, IBM, Microsoft, Salesforce, Google, and OpenAI have each moved to address execution layer governance. Their approaches reflect three distinct theories of where the control point should live.

| Approach | Vendors | Core mechanism |
| --- | --- | --- |
| Model-level governance | OpenAI | Model Spec Chain of Command: the model itself is trained to respect authority hierarchies and apply caution at irreversible decision points |
| Protocol and runtime enforcement | Anthropic, Microsoft | MCP as the governed interface between agents and external systems; the Agent Governance Toolkit intercepting every action pre-execution at sub-millisecond latency |
| Platform governance | IBM, Salesforce | watsonx.governance and Einstein Trust Layer as enterprise platforms enforcing consistent policy across AI deployments regardless of underlying model |
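
To show the pattern behind the protocol and runtime enforcement row, here is a generic sketch of pre-execution interception: every tool call passes through a policy hook before it is allowed to run. This is not vendor code; all names are hypothetical.

```python
import functools

def governed(policy_check):
    """Wrap a tool so every invocation is checked before execution."""
    def decorator(tool_fn):
        @functools.wraps(tool_fn)
        def wrapper(*args, **kwargs):
            verdict = policy_check(tool_fn.__name__, args, kwargs)
            if verdict != "allow":
                raise PermissionError(
                    f"{tool_fn.__name__} blocked pre-execution: {verdict}")
            return tool_fn(*args, **kwargs)  # runs only if policy allows
        return wrapper
    return decorator

# Hypothetical policy: an in-process check keeps latency overhead minimal.
def deny_external_transfers(tool_name, args, kwargs):
    return "deny" if tool_name == "transfer_funds" else "allow"

@governed(deny_external_transfers)
def transfer_funds(account: str, amount: float) -> str:
    return f"transferred {amount} to {account}"
```

The architectural point is the one the table makes: the control sits between the agent and the system it acts on, not inside the model.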


None of these approaches is wrong. But they share a common characteristic: they are designed for organisations with dedicated engineering teams, major cloud contracts, and the capacity to implement complex governance infrastructure. The mid-market enterprise that has licensed a vertical AI agent solution from a third party, and has no clear view of what data it accesses or what actions it can take, is not the target customer for any of these tools.

Singapore has set a global benchmark

On 22 January 2026, Singapore published the world’s first governance framework specifically designed for agentic AI, developed by IMDA and AISG with contributions from AWS, Google, and Microsoft. It establishes four dimensions every organisation deploying agents should address:

  • Risk bounding — selecting appropriate use cases and placing explicit limits on agent capabilities before deployment begins (see the sketch after this list)
  • Human accountability — defining meaningful checkpoints where human approval is required, not as bureaucratic process but as substantive control
  • Technical controls — implementing governance mechanisms across the full agent lifecycle
  • End-user responsibility — ensuring users of agent-assisted systems understand what is automated and what is not
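
As a hedged sketch of what risk bounding can look like in practice, consider a declarative capability manifest, written and reviewed before deployment. The field names and values below are illustrative assumptions, not terms from the IMDA framework.

```python
from dataclasses import dataclass, field

@dataclass
class AgentManifest:
    """Explicit, reviewable limits set before an agent is deployed."""
    agent_id: str
    allowed_tools: set[str]   # everything else is denied by default
    data_scopes: set[str]     # datasets the agent may touch
    daily_budget: float       # hard spend ceiling
    human_checkpoints: set[str] = field(default_factory=set)  # tools needing sign-off

# Hypothetical example: a finance triage agent with narrow bounds.
INVOICE_AGENT = AgentManifest(
    agent_id="invoice-triage-01",
    allowed_tools={"erp.read_invoice", "erp.flag_invoice", "email.draft"},
    data_scopes={"finance.invoices"},
    daily_budget=50.0,
    human_checkpoints={"email.draft"},  # drafts go out only after approval
)
```

The value is that the limits exist as a reviewable artefact before the agent runs, which is what "before deployment begins" demands.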

Compliance is voluntary. Legal accountability for agent behaviour is not. The framework sets the standard against which regulators and courts will evaluate whether an organisation acted responsibly when something goes wrong.

With the EU AI Act’s high-risk obligations taking effect in August 2026 and OWASP’s Agentic Top 10 already being used as an enterprise audit reference, the compliance window is shorter than most organisations realise.

What this means in practice

Bain & Company put the governing principle clearly: governance and trust must precede orchestration and scale. Organisations that build governance after deployment end up rebuilding it at significant cost.

The minimum governance vocabulary for any organisation deploying agents right now is five decisions that every execution-layer policy system must be able to make: allow, deny, require human approval, throttle, or constrain with runtime limits. If you cannot articulate how your current agent deployments handle each of these, you have a governance gap.
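
Expressed as code, that vocabulary is deliberately small. The sketch below is illustrative (the tool names and thresholds are assumptions), but the five outcomes are exactly the five decisions named above.

```python
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"
    DENY = "deny"
    REQUIRE_APPROVAL = "require_human_approval"
    THROTTLE = "throttle"
    CONSTRAIN = "constrain"  # allow, but with runtime limits attached

# Illustrative policy constants; real values belong in versioned config.
KNOWN_TOOLS = {"crm.read", "crm.update", "email.send"}
IRREVERSIBLE_TOOLS = {"email.send"}
RATE_LIMIT = 30       # calls per minute
PER_ACTION_CAP = 10.0 # spend per action

def decide(tool: str, cost: float, calls_this_minute: int) -> Decision:
    """Map one proposed action to one of the five governance outcomes."""
    if tool not in KNOWN_TOOLS:
        return Decision.DENY              # unknown capability: refuse
    if tool in IRREVERSIBLE_TOOLS:
        return Decision.REQUIRE_APPROVAL  # human in the loop
    if calls_this_minute > RATE_LIMIT:
        return Decision.THROTTLE          # slow down, do not refuse
    if cost > PER_ACTION_CAP:
        return Decision.CONSTRAIN         # run with a reduced limit
    return Decision.ALLOW
```

If your deployment can name which of these five a given action receives, and why, you have the beginnings of an execution layer policy. If it can only say "the agent decided", you have the gap.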

The gap between where most organisations are and where the risk requires them to be is measurable. In Singapore, 72 percent of organisations plan to deploy agentic AI within two years. Only 14 percent have mature governance in place. That distance does not close by itself.


The full Aivance analysis covers vendor strategies in depth, the OWASP Agentic Top 10 risk taxonomy, the five-decision governance vocabulary, and strategic priorities for enterprise AI leaders. Request it below.

Arjen Hendrikse
Founder of Aivance Consulting. ISO/IEC 42001:2023 Lead Auditor. Thirty years working at the edge of what technology can do. More about Arjen
This article was drafted with AI assistance and reviewed for accuracy by Arjen Hendrikse before publication. AI Use Policy

Get the full analysis

The complete report covers vendor strategies in depth, the OWASP Agentic Top 10 risk taxonomy, the five-decision governance vocabulary, and strategic priorities for enterprise AI leaders.

Enter your email and we will send it to you directly.

No newsletter. No follow-up sequence. Just the report.