Who Authorised That? The Case to Know Your Agent in Financial Crime Prevention

When a compliance investigation is triggered today, the first question a senior officer asks is still a familiar one: who initiated this? Increasingly, the honest answer is an AI agent. And the unsettling follow-up is whether that agent has a verifiable identity, a bounded scope of authority, and a traceable decision trail.

The scale of agentic deployment in financial services has accelerated sharply. Gartner estimates that fewer than 5% of enterprise applications featured task-specific AI agents in 2025; by the end of this year, that figure is projected to reach 40%. According to the latest market data, 72% of Global 2000 companies are already operating AI agent systems beyond the experimental phase, with 44% of finance teams expected to adopt agentic AI in 2026 — a growth rate of over 600% from the prior year. These are not passive tools. They ingest documents, screen counterparties, flag suspicious transactions, and draft investigation summaries, often without a human ever reviewing the individual step.

The problem they are deployed to address is formidable. Global illicit financial activity exceeded $4.4 trillion in 2025, according to Nasdaq Verafin’s annual Financial Crime Report, while INTERPOL recorded $442 billion in online fraud losses over the same period. AI-enabled attacks are proving particularly resistant to traditional controls: Chainalysis reported a 1,400% increase in impersonation scams, with AI-assisted attacks 4.5 times more profitable than manual equivalents. The compliance industry has rightly embraced agentic AI as its strongest line of defence. The question now is whether those deployments are themselves governed.

KYA: The Compliance Logic You Already Know, Applied to Machines

Know Your Agent (KYA) extends the established compliance logic of KYC and KYB to AI. Just as KYC establishes who a customer is and whether they pose financial crime risk, KYA verifies an AI agent’s identity, authorisation scope, and ongoing behaviour. Just as KYB requires institutions to understand the structure and ultimate beneficial ownership of a legal entity, KYA demands that compliance teams can account for what an agent can do, who is responsible for its actions, and how those actions can be explained to a regulator.

The regulatory architecture supporting this expectation is solidifying rapidly. Singapore released the world’s first Model AI Governance Framework for agentic AI in January 2026, directly addressing autonomous agent oversight. The EU AI Act comes into full effect for high-risk AI systems — including fraud detection, AML profiling, and automated access controls — on 2 August 2026. On 25 March, the UK’s Financial Conduct Authority signalled it is reviewing whether current payment services rules are adequate for agent-initiated transactions, acknowledging that existing frameworks were not designed with autonomous systems in mind. The US National Institute of Standards and Technology launched a dedicated AI Agent Standards Initiative in February, focused on agent identity and authentication, action logging, and containment boundaries. MAS published an AI Risk Management Toolkit earlier this month, developed with 24 financial institutions and explicitly addressing agentic AI risks, including prompt injection.

The Risks No Governance Framework Can Ignore

OWASP released its Top 10 for Agentic Applications in February 2026, developed by over 100 security researchers and peer-reviewed by NIST and the European Commission. Prompt injection, in which malicious inputs manipulate an agent into taking unintended actions, is identified as the most prevalent AI exploit in production environments. One documented financial services incident involved a reconciliation agent manipulated into attempting a bulk export after a crafted query matched every record in the customer database. The speed of agentic AI makes this category of attack particularly dangerous: Anthropic's disclosure of the first AI-orchestrated cyberattack noted the system was making thousands of requests per second, a pace impossible for human actors to replicate or intercept in real time.
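One practical response to this class of attack is a hard containment boundary on the agent's data-access tools: even if a manipulated query matches every record, the export is refused before it leaves the perimeter. The sketch below illustrates the idea in Python; all names and thresholds are illustrative assumptions, not any vendor's actual API.

```python
# Illustrative containment boundary for an agent's data-access tool.
# A manipulated query can still match every record, but the hard cap
# fires before a bulk export happens. All names are hypothetical.

MAX_ROWS_PER_CALL = 500  # hard containment limit, set by policy


class ContainmentViolation(Exception):
    """Raised when an agent request exceeds its containment boundary."""


def guarded_export(run_query, query: str, max_rows: int = MAX_ROWS_PER_CALL):
    """Execute a query on the agent's behalf, refusing oversized results."""
    rows = run_query(query)
    if len(rows) > max_rows:
        # Fail closed and surface an alert rather than truncating silently.
        raise ContainmentViolation(
            f"query matched {len(rows)} rows, limit is {max_rows}; "
            "flagging for human review"
        )
    return rows


# Simulated backing store: the injected query matches everything.
records = [{"id": i} for i in range(10_000)]

try:
    guarded_export(lambda q: records, "SELECT * FROM customers WHERE 1=1")
except ContainmentViolation as e:
    print("blocked:", e)
```

The design choice worth noting is that the guard fails closed: an oversized result raises an exception and triggers review rather than quietly returning a truncated dataset the agent might act on.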

Over-authorisation represents a parallel structural risk. AI agents, unlike human employees, do not recognise when a request falls outside their mandate unless explicit permission boundaries have been defined and enforced. An AML investigation agent with broad read-write access to core banking records is not an edge case — it is the default when agent deployments outpace governance programmes. Model drift compounds both risks by operating silently: a transaction monitoring agent that begins to miss suspicious patterns continues to process thousands of cases daily without triggering any alert, accumulating undetected compliance exposure with every cycle. The EU AI Act, NIST AI RMF, and the new MAS guidelines all explicitly require ongoing post-deployment performance monitoring for this reason.

What Sound KYA Governance Looks Like in Practice

Effective KYA rests on four interlocking principles that compliance leaders will recognise from their existing frameworks. Verifiable agent identity requires that every agent operating in a regulated workflow carries a cryptographically signed credential tied to a known developer identity, a defined capability scope, and a named institutional sponsor. Least-privilege authorisation ensures that a KYC intake agent holds no access to transaction monitoring records, and a sanctions screening agent cannot write to customer profiles. Explainability at decision point means agents must log not only what they decided but why, a requirement now explicit in FCA and OCC guidance. And human oversight at high-stakes junctures reserves final authority on suspicious activity report recommendations, sanctions match confirmations, and customer approval decisions for human compliance officers. Speed and scale are the advantages agentic AI delivers; accountability is what keeps those advantages on the right side of regulators.
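Two of these principles, explainability at decision point and human oversight at high-stakes junctures, can be combined in a single structured decision log. The sketch below shows one plausible shape for such a log entry: the rationale travels with the outcome, high-stakes decision types automatically require human sign-off, and a hash makes later tampering detectable. Field names and decision types are assumptions, not a prescribed schema.

```python
# Sketch of "explainability at decision point": every agent action
# emits a structured log recording not just the outcome but the
# rationale, plus whether a human checkpoint is required.
# Field names and decision types are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone

HIGH_STAKES = {"sar_recommendation", "sanctions_match", "customer_approval"}


def log_decision(agent_id, decision_type, outcome, rationale, evidence_ids):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "decision_type": decision_type,
        "outcome": outcome,
        "rationale": rationale,          # the "why", not just the "what"
        "evidence_ids": evidence_ids,
        "requires_human_signoff": decision_type in HIGH_STAKES,
    }
    # Tamper-evidence: hash the entry so later edits are detectable.
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry


rec = log_decision(
    agent_id="aml-investigator-07",
    decision_type="sar_recommendation",
    outcome="escalate",
    rationale="3 structuring patterns across linked accounts in 14 days",
    evidence_ids=["txn-9912", "txn-9913", "alert-551"],
)
assert rec["requires_human_signoff"] is True
```

Because the sign-off requirement is derived from the decision type rather than left to the agent's discretion, the human checkpoint cannot be reasoned away by a manipulated prompt.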

The Market Is Already Moving

Commercial deployment of KYA is no longer theoretical. Sumsub launched what it describes as the industry’s first AI Agent Verification solution in January 2026, introducing “Agent-to-Human Binding” — a mechanism that cryptographically links each agent to a verified human identity at moments of highest compliance risk, including onboarding, account control changes, and high-value payouts. In February, Bretton AI — the agentic compliance platform formerly known as Greenlite AI — closed a $75 million Series B backed by Sapphire Ventures, Greylock, and Thomson Reuters Ventures, positioning its proprietary “Trust Infrastructure” as the layer that makes AI-driven KYC and AML investigations audit-ready and explainable. These are not pilot-stage experiments. They reflect institutional conviction that governance-first agentic AI is the only commercially viable model in a regulated environment, and that the institutions which embed KYA controls early will deploy at greater scale and confidence than those which retrofit governance after the fact.

Three Things Compliance Leaders Should Do Now

For compliance leaders preparing for the second half of 2026, three actions are most urgent. First, audit the agents already operating in your compliance stack, including those supplied by vendors, and ask whether each holds a verifiable identity, operates within documented permission boundaries, and generates decision logs adequate for regulatory scrutiny. Second, apply established model risk management principles to your agent portfolio: the validation, drift monitoring, and change management disciplines that govern quantitative models under SR 11-7 apply equally to AI agents making KYC and AML decisions. Third, designate explicit human approval checkpoints for decisions carrying the highest regulatory consequence. Institutions that move early on KYA will deploy agentic AI with greater confidence and a broader mandate. Those that wait risk a governance gap that becomes significantly more difficult, and more expensive, to close under regulatory pressure.
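The drift-monitoring discipline in the second action can start very simply: compare an agent's recent alert rate against its validated baseline and flag silent degradation in either direction. A minimal sketch, with purely illustrative thresholds:

```python
# Hedged sketch of post-deployment drift monitoring for a transaction
# monitoring agent: compare the recent alert rate against a validated
# baseline and flag silent degradation. Thresholds are illustrative.

def drift_check(baseline_rate: float, recent_hits: int, recent_cases: int,
                tolerance: float = 0.5) -> bool:
    """Return True if the recent alert rate drifted beyond `tolerance`
    (as a fraction of the baseline) in either direction."""
    if recent_cases == 0:
        return True  # zero throughput is itself an anomaly
    recent_rate = recent_hits / recent_cases
    return abs(recent_rate - baseline_rate) > tolerance * baseline_rate


# Baseline: 2% of cases historically trigger an alert.
assert drift_check(0.02, recent_hits=3, recent_cases=1000)       # 0.3%: drifted low
assert not drift_check(0.02, recent_hits=18, recent_cases=1000)  # 1.8%: in band
```

A falling alert rate matters as much as a rising one: an agent that quietly stops flagging suspicious patterns accumulates compliance exposure with every unexamined cycle.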

About WIDTH

WIDTH is an AI-native unified compliance platform that helps regulated institutions worldwide carry out compliance work in a way that is more efficient, auditable, and scalable. By integrating intelligent workflows, risk automation, and audit-grade execution capabilities, WIDTH enables institutions to achieve both greater efficiency and greater trust in an evolving regulatory environment.

Learn more at width.com →

© 2026 WIDTH Pte. Ltd.