agentic AI · AI governance · enterprise AI · Singapore · SEA

Agentwashing in Southeast Asia: What Vendors Are Selling and What You Are Buying

Arjen Hendrikse · 5 min read

Gartner has a word for a pattern that is spreading quickly through enterprise AI sales cycles: agentwashing. It describes the gap between what vendors label as AI agents and what those systems actually do. Understanding that gap is one of the more useful things a leadership team in Southeast Asia can do before committing budget to agentic AI.

What an AI agent actually is

The simplest honest definition: an AI agent is software that completes a task on your behalf, reading information, following defined rules, connecting to business systems, and taking set actions within agreed boundaries.

The operative word is actions. A language model produces text for a human to review. An AI agent acts: it sends a communication, executes a transaction, queries a live system, triggers a downstream workflow. The distinction carries real weight, because the governance requirements for systems that act are fundamentally different from those for systems that produce recommendations.
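
To make the line concrete, here is a minimal sketch in Python. Every name in it is hypothetical, not any vendor's real API; the point is only where the side effect happens.

```python
# Illustrative sketch only: the line between "recommends" and "acts".
# Every name here is hypothetical, not a vendor's real API.

ALLOWED_ACTIONS = {"send_email", "query_crm"}  # the agreed boundaries

def recommend(model_output: str) -> str:
    """A language model stops here: text for a human to review."""
    return model_output

def act(action: str, payload: dict) -> str:
    """An agent crosses this line: it executes against live systems."""
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"'{action}' is outside the agreed boundaries")
    # Stand-in for a real integration (CRM query, email send, transaction).
    return f"executed {action} with {payload}"

print(recommend("Suggest refunding order 1042"))     # no side effect
print(act("send_email", {"to": "ops@example.com"}))  # a real side effect
```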

A useful frame is the capable intern analogy. Most AI agents today are less like a digital employee making independent judgements and more like a capable intern with a detailed checklist: fast, useful, and good at following instructions, but still requiring clear guidance, defined boundaries, and human oversight at consequential decision points. A smaller number of more advanced systems can reason across multiple steps and act with greater independence. Most of what is marketed as AI agent technology today is closer to structured software following defined steps.

The distinction matters because the intern and the autonomous decision-maker require very different governance structures. One needs a process and a checklist. The other needs authority architecture, override mechanisms, and board-reportable oversight. Treating them the same way is how organisations end up with governance that is either excessive for simple automation or dangerously insufficient for genuinely autonomous systems.

What agentwashing looks like in practice

In a sales context, agentwashing typically takes one of three forms.

The first is relabelling existing automation. A workflow tool that executes a predefined sequence of steps gets rebranded as an AI agent because it uses a language model to parse an input. The core logic is deterministic. The governance requirements are minimal. But the marketing positions it alongside genuinely autonomous systems that require substantive oversight.
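
Stripped of the label, such a product often reduces to something like the sketch below: the language model is confined to parsing the input, and everything downstream is a fixed branch. The names are illustrative, not any vendor's design.

```python
# Sketch of a relabelled workflow tool: the "AI" is confined to parsing,
# and the core logic is deterministic. classify() stands in for a
# language-model call; all names here are hypothetical.

def classify(ticket_text: str) -> str:
    # Imagine an LLM call here; a keyword rule is enough for the sketch.
    return "refund" if "refund" in ticket_text.lower() else "general"

def route(ticket_text: str) -> str:
    category = classify(ticket_text)  # the only "AI" step
    if category == "refund":          # fixed, auditable branches
        return "queue:finance"
    return "queue:support"

print(route("Customer requests a refund for order 1042"))  # queue:finance
```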

The second is overstating capability. A system that can autonomously complete a task in a demo environment, using prepared data and a constrained set of inputs, gets sold as production-ready agentic AI. In a real enterprise environment, with live data, edge cases, and integrations to existing systems, performance degrades significantly. The demo works. The production deployment does not.

The third is underplaying governance. The vendor’s materials focus extensively on what the agent can do and say very little about the governance infrastructure required to deploy it safely. Questions about override mechanisms, audit trails, permission scoping, and regulatory compliance get deferred to implementation. By the time the organisation asks those questions seriously, the procurement decision has already been made.

None of this is unique to Southeast Asia. But the region has a specific vulnerability. AI adoption across APAC is running ahead of governance maturity on most measures. Organisations under pressure to demonstrate AI progress are more likely to accept vendor framings uncritically when the category is new and the internal expertise to evaluate it is still being built.

The three questions that cut through it

Before you treat a vendor’s product as a genuine AI agent requiring enterprise governance, three questions clarify what you are actually dealing with.

What actions can this system take, and in which systems? If the answer is “it generates recommendations for a human to act on,” the governance requirements for that system are closer to those for a language model. If the answer involves triggering workflows, writing to databases, sending communications, or executing transactions, the governance requirements are substantively different. Get a specific answer, not a marketing summary.
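
One way to force a specific answer is to ask the vendor to write the agent’s permissions down as an explicit, deny-by-default scope, system by system. The structure below is an illustrative sketch, not a standard schema.

```python
# An illustrative permission scope: which actions, in which systems.
# The field names are assumptions, not a standard schema.

AGENT_SCOPE = {
    "crm":      {"read": True, "write": False},
    "email":    {"send": True},
    "payments": {"execute": False},  # explicitly out of scope
}

def is_permitted(system: str, action: str) -> bool:
    """Deny by default: anything not written down is not allowed."""
    return AGENT_SCOPE.get(system, {}).get(action, False)

print(is_permitted("email", "send"))        # True
print(is_permitted("payments", "execute"))  # False
print(is_permitted("erp", "write"))         # False: never granted
```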

What happens when it encounters something outside its training or instructions? An autonomous system that behaves unpredictably at the edges of its task definition is a genuine operational risk. Ask for documented evidence of how the system handles ambiguous or out-of-scope inputs: does it halt and escalate to a human, or does it proceed? The answer to this question tells you more about the governance maturity of the product than anything in the sales materials.
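
In principle, the documented behaviour you want looks like the guard below: on an unrecognised intent or low confidence, the system halts and escalates rather than proceeding. The threshold and the names are assumptions made for the sketch.

```python
# Sketch of a halt-and-escalate guard. The 0.8 threshold and all names
# are illustrative assumptions, not documented vendor behaviour.

KNOWN_INTENTS = {"refund", "address_change"}

def handle(intent: str, confidence: float) -> str:
    if intent not in KNOWN_INTENTS or confidence < 0.8:
        return "HALT: escalate to human review"  # the safe default
    return f"PROCEED: executing '{intent}'"

print(handle("refund", 0.95))         # proceeds
print(handle("cancel_policy", 0.99))  # out of scope: halts
print(handle("refund", 0.40))         # low confidence: halts
```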

What does human oversight actually look like in production? Not in the demo. Not in the vendor’s reference architecture. In your production environment, with your data, your team, and your regulatory obligations. Specifically: at what decision points does a human see what the agent is doing before it acts, and what is the mechanism for stopping it if the action is wrong? If the vendor cannot answer this specifically, the governance of the system has not been designed. It has been assumed.
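
Concretely, that means an approval gate sits between the proposed action and its execution, and a stop mechanism works independently of any single workflow. A minimal sketch under those assumptions, with entirely hypothetical names:

```python
# Sketch of an approval gate and a stop mechanism. All names are
# illustrative; the point is the order: propose, review, then act.

import threading

kill_switch = threading.Event()  # settable by anyone with the authority

def execute_with_oversight(proposed_action: str, approve) -> str:
    if kill_switch.is_set():
        return "STOPPED: kill switch engaged"
    if not approve(proposed_action):  # a human sees it before it acts
        return f"REJECTED: {proposed_action}"
    return f"EXECUTED: {proposed_action}"

# A stand-in approver; in production this is a person, not a lambda.
print(execute_with_oversight("refund order 1042", lambda a: True))
kill_switch.set()
print(execute_with_oversight("refund order 1043", lambda a: True))
```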

Why this matters for regulated industries in SEA

For enterprises in financial services, insurance, healthcare, and government-adjacent sectors across Southeast Asia, the agentwashing risk goes beyond the commercial. It is regulatory.

MAS has begun explicitly addressing model risk and human oversight in its proposed AI Risk Governance guidelines. IMDA published a specific framework for agentic AI governance in 2026. The pattern in regional regulation is moving from voluntary guidance toward enforceable expectations. An organisation that has deployed something marketed as an AI agent, without the governance infrastructure appropriate to what the system actually does, will face those questions under conditions that are less comfortable than a sales cycle.

The practical protection is knowing exactly what you have. Not what the vendor called it. What it does, what it can access, and who holds the authority to stop it.

That is the question worth asking before the procurement decision, not after the deployment.


Aivance works with enterprises in Singapore and Southeast Asia deploying autonomous AI. If you are assessing a vendor’s agentic AI claims or trying to understand what governance your current deployments require, the AI Governance Review is the right starting point. Book a 30-minute session here.


This article was drafted with AI assistance and reviewed for accuracy by Arjen Hendrikse before publication. AI Use Policy

Arjen Hendrikse
Founder of Aivance Consulting. ISO/IEC 42001:2023 Lead Auditor. Thirty years working at the edge of what technology can do. More about Arjen

Put what you just read to work

If this article raised questions about your own governance posture, the AI Governance Review is the right next step. Thirty minutes, free.

Book Your Free Governance Review