
The IMDA AI Governance Framework: What Businesses in Singapore Actually Need to Know

Arjen Hendrikse

Most businesses in Singapore that use AI have heard of the IMDA Model AI Governance Framework. Most of them have not actually read it. And of those that have read it, fewer still have genuinely translated its four governance areas into operational practice.

This article explains what the framework requires, what the 2024 and 2026 updates added, and what genuine compliance looks like — not just the words in the policy document, but the evidence a regulator would expect to see.

What the framework is, and what it is not

The IMDA Model AI Governance Framework is principles-based. It sets out what good AI governance looks like without mandating a specific set of procedures for every organisation to follow. This is deliberate. A prescriptive framework that tells every business exactly what to do would quickly become outdated as AI technology evolves, and would not account for the significant variation in how different organisations use AI.

Principles-based does not mean toothless. It means that regulators assess your intent, your effort, and your outcomes — not just whether you have ticked the right boxes. An organisation with a well-written governance policy that is never actually followed is not compliant, even if the policy exists. An organisation with less polished documentation but genuine operational governance is in a better position.

The framework was first published in 2019 and has been updated several times since. The 2024 version addressed generative AI, adding guidance on transparency, human oversight, and data governance for large language model deployments. The 2026 update introduced a dedicated framework for agentic AI, which is discussed below.

The four governance areas

The framework organises its requirements into four governance areas.

1. Human oversight and accountability

This is the most frequently misunderstood area. “Human in the loop” has become a phrase that organisations use to check a box without examining what it actually means.

Genuine human oversight means that the humans reviewing AI outputs have the information, the authority, the time, and the interface quality to actually make a decision. If a reviewer processes two hundred AI-generated risk assessments per day, each in forty-five seconds, the oversight is nominal. The human is present but not genuinely reviewing.

The framework expects organisations to assess whether their oversight mechanisms are effective, not just whether they exist.

2. Operations management

AI systems need to be monitored after they are deployed. Most organisations do rigorous testing before launch and minimal monitoring afterward. The framework requires documented monitoring processes, defined thresholds for when human intervention is triggered, and an audit trail that demonstrates the system is performing as intended.

This means operational infrastructure: logging, alerting, a process for reviewing flags, and a defined escalation path. If your AI system is running without any mechanism to detect when its outputs have changed in quality or character, you do not have operations management governance.
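To make that concrete, here is a minimal sketch in Python of what a documented threshold with a logged audit trail and a named escalation owner might look like. The metric names, limit values, and owner roles are illustrative assumptions, not values the framework prescribes.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-ops")

@dataclass
class MonitoringThreshold:
    """One documented threshold: the metric, its limit, and who owns the escalation."""
    metric: str
    limit: float
    escalation_owner: str  # illustrative role name, defined by your own governance structure

# Illustrative thresholds only; real values come from your own risk assessment.
THRESHOLDS = [
    MonitoringThreshold("override_rate", 0.15, "model-owner"),
    MonitoringThreshold("drift_score", 0.30, "head-of-risk"),
]

def review_metrics(observed: dict) -> list:
    """Compare observed metrics against documented thresholds.

    Every comparison is logged (the audit trail); every breach names
    an owner (the escalation path). Returns the breached metrics.
    """
    breached = []
    for t in THRESHOLDS:
        value = observed.get(t.metric)
        if value is None:
            log.warning("no reading for %s - that is a monitoring gap", t.metric)
            continue
        log.info("%s = %.2f (limit %.2f)", t.metric, value, t.limit)
        if value > t.limit:
            log.error("threshold breached: %s - escalate to %s", t.metric, t.escalation_owner)
            breached.append(t.metric)
    return breached

# Example weekly review run: override_rate breaches, drift_score does not.
review_metrics({"override_rate": 0.22, "drift_score": 0.11})
```

The specifics will differ for every organisation; what matters is that the thresholds are written down, the comparisons leave a trail, and a breach lands on a named person rather than in an unread log file.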

3. Stakeholder communication and education

Transparency about AI is not just a values statement. The framework treats it as an operational requirement. Customers who interact with AI systems should know they are interacting with AI. Employees whose work is assessed by AI systems should understand how those assessments are made.

This area also covers internal education. Boards and senior management are expected to have sufficient understanding of AI to provide meaningful oversight. “We trust our technical team to handle it” is not a governance posture that satisfies this requirement.

4. Organisational governance and leadership

AI risk must be managed at the board level, not just the technical level. The framework expects organisations to have defined AI risk ownership at senior levels, board-level visibility of AI risk, and a governance structure that connects operational AI management to strategic leadership.

For most mid-market companies, this is the area that requires the most work. It means creating governance structures that did not exist before, rather than documenting practices that already happen informally.

The 2026 addition: governance for agentic AI

IMDA published a dedicated framework for agentic AI in 2026. Agentic AI refers to systems that take autonomous actions: browsing the web, executing code, sending communications, managing files, or interacting with external systems on behalf of a user or organisation.

The agentic AI framework adds four specific governance requirements on top of the core framework:

Safe operating parameters. Agents must have defined boundaries on what actions they can take. These boundaries must be documented, reviewed, and updated as the agent’s capabilities change. (A minimal sketch of such a boundary check follows this list.)

External system interaction governance. When an agent integrates with external systems or data sources, there are specific accountability questions about what data is shared, on what basis, and how it is tracked.

Meaningful human accountability. IMDA is explicit that humans remain ultimately accountable for AI agent actions. This requires governance structures that define when human approval is required before an agent acts, and what information the human needs to give informed approval.

Monitoring and correction. Agents must be monitored continuously. There must be a process for detecting when an agent behaves unexpectedly and for correcting its behaviour before harm occurs.
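Here is a minimal Python sketch of the first and third requirements: an explicit allowlist that defines an agent’s safe operating parameters, and a gate that blocks high-impact actions until a named human approves. The action names, the approval mechanism, and the split between the two sets are illustrative assumptions, not categories IMDA specifies.

```python
# Safe operating parameters: an explicit, documented boundary on agent actions.
# All action names and the approval mechanism are illustrative assumptions.
ALLOWED_ACTIONS = {"search_web", "read_file", "draft_email"}       # agent may act freely
APPROVAL_REQUIRED = {"send_email", "execute_code", "delete_file"}  # human gate

def authorise(action: str, context: str, approver: str | None = None) -> bool:
    """Return True only if the action is inside the documented boundary."""
    if action in ALLOWED_ACTIONS:
        return True
    if action in APPROVAL_REQUIRED:
        if approver is None:
            # The agent stops and surfaces the context a human would need
            # to give informed approval.
            print(f"BLOCKED: '{action}' needs human approval. Context: {context}")
            return False
        print(f"'{action}' approved by {approver}")
        return True
    # Anything undocumented is outside the safe operating parameters by default.
    print(f"DENIED: '{action}' is outside the agent's defined boundary")
    return False

authorise("search_web", "market research")                    # allowed
authorise("send_email", "quarterly report to a client")      # blocked: no approver
authorise("send_email", "quarterly report", approver="ops")  # allowed with approval
```

The design point is the default: any action that is not documented is denied, which is what keeps the boundary meaningful as the agent’s capabilities grow.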

What genuine compliance looks like

Genuine compliance is an operational posture, not a document. It looks like:

An AI system inventory that is current and reviewed at defined intervals. Not a spreadsheet created eighteen months ago and never updated. (A minimal sketch of what one inventory entry might record follows this list.)

Monitoring reports that are actually read. Not log files that exist but are only consulted when something breaks.

Human reviewers who have been trained to identify edge cases and who have a process for flagging concerns. Not reviewers who approve outputs because the system has never flagged a problem.

Board reporting that reflects the actual AI risk posture. Not reporting that describes governance frameworks in general terms without connecting them to specific systems and specific risks.
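As a concrete illustration of the first point above, here is a minimal Python sketch of what one entry in a living AI system inventory might record. The field names and the ninety-day review interval are assumptions for illustration; the substance is that every system has a named owner, a risk tier, and a review date that can visibly lapse.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AISystemRecord:
    """One entry in a living AI system inventory. Field names are illustrative."""
    name: str
    business_owner: str        # an accountable person, not a team alias
    risk_tier: str             # e.g. "high" / "medium" / "low"
    last_reviewed: date
    review_interval_days: int

    def review_overdue(self, today: date) -> bool:
        # The failure mode this section warns about: an inventory
        # that exists but is never re-reviewed.
        return today > self.last_reviewed + timedelta(days=self.review_interval_days)

record = AISystemRecord(
    name="credit-risk-scorer",
    business_owner="Head of Credit",
    risk_tier="high",
    last_reviewed=date(2025, 1, 15),
    review_interval_days=90,
)
print(record.review_overdue(date(2026, 2, 1)))  # True: the review has lapsed
```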

The direction of travel

Singapore’s National AI Council, chaired by Prime Minister Lawrence Wong, is in the process of defining clear rules for AI development and use. The direction of travel is toward more specific, more enforceable obligations, not fewer. Organisations that treat the IMDA Framework as a principles document they broadly aspire to are running a governance risk that is growing, not shrinking.

The practical question is not whether compliance will eventually be required. It is whether you have the baseline in place when the requirements are formalised.


If reading this has raised questions about your own governance posture, the AI Governance Review is the right next step. Thirty minutes, no pitch deck, a clear picture of where you stand.

Arjen Hendrikse
Founder of Aivance Consulting. ISO/IEC 42001:2023 Lead Auditor. Thirty years working at the edge of what technology can do. More about Arjen
This article was drafted with AI assistance (Claude by Anthropic) and reviewed for accuracy by Arjen Hendrikse before publication. AI Use Policy
