AIUC-1: A New Framework for Agentic AI Governance

What is AIUC-1?
AIUC-1 is the world’s first AI agent standard[1], proposed by the Artificial Intelligence Underwriting Company (AIUC). Unlike current cybersecurity standards such as SOC 2 and ISO 27001, AIUC-1 is built specifically for AI agents. It combines elements from a variety of standards and pieces of legislation into an active agent standard. By adhering to the policies built into AIUC-1, organisations can certify that their agent deployments are secure and safe, with clear accountability. In the long run, complying with AIUC-1 will also give those companies access to insurance on the deployment of their agents, through AIUC.
AIUC itself is a startup backed by Anthropic cofounder Ben Mann and former CISOs from Google Cloud and MongoDB[2]. The standard it presents is forward-thinking, but it is not yet supported by popular compliance tools like Vanta or Drata. It is gaining traction within the industry, however: at a recent event hosted by the CISO of Fin (Intercom), we learned that they are working towards complying with AIUC-1.
Why this matters now
It is predicted that by 2028, 15% of work decisions will be made by AI agents[3]. Agents, like many new technologies, present a variety of emerging risks. Some we have seen before, such as providing new external entry points for potential attackers. Unlike previous technologies, however, agents can make decisions for themselves. This presents an array of novel risks which are already beginning to surface. For example, a recent survey found that 80% of organisations had encountered risky behaviour from AI agents through unintended actions, such as accessing and sharing inappropriate data[4].
Current standards aren’t fit for the agent world
Current cybersecurity frameworks are built for previous technologies and focus on people, processes, and systems. Frameworks like ISO 27001 and SOC 2 fail to account for the fact that agents are adaptable and can act autonomously and independently. Instead, agents require continuous and active governance, much like humans themselves.
There have been more recent attempts to apply AI-specific governance, such as ISO 42001 and the EU AI Act. However, neither is agent-specific, and each has its own issues.
ISO 42001 is a standard covering the responsible development and deployment of AI, but it still focuses on systems and processes. It does not demonstrate continuous safety: compliance is tested and renewed only once a year.
The EU AI Act is regulation that classifies AI systems and defines corresponding obligations. However, it does not specify how to conform, and in many places the legislation is vague.
Key principles of AIUC-1
AIUC-1 is built on a variety of different standards and pieces of legislation, adapting and operationalising their principles and obligations for the agentic world. These include the EU AI Act, ISO 42001, NIST AI RMF, and regional legislation such as the Colorado AI Act[5].
AIUC-1 is forward-looking: it requires you to actively monitor your agents in order to adapt to the evolving risk landscape. It currently mandates regular testing (once a quarter) alongside annual renewal of the certification.
AIUC-1 is built on 6 key pillars, each with a specific risk prevention purpose:
| Pillar | Purpose |
|---|---|
| Data & Privacy | Protect against data leakage, ensure confidentiality, control how training data is used. |
| Security | Guard against adversarial attacks (e.g. prompt injection, jailbreaks), enforce system integrity. |
| Safety | Prevent harmful outputs and actions, ensure fail-safe mechanisms, and relevant safeguards. |
| Reliability | Ensure predictable behaviour, proper error handling, robustness under stress. |
| Accountability | Audit trails, human oversight, and clear responsibility for agent decisions. |
| Society | Align agent behaviour with societal goals, ethics, regulatory compliance, and prevent misuse. |
To qualify for AIUC-1, a company must adopt 40 technical, operational, and legal safeguards across all six pillars, with a further 12 optional.
Turning AIUC-1 into reality with Handlebar
Handlebar allows you to deterministically enforce rules before an agent acts and gives you clear auditable logs of their actions. This means that you can comply with AIUC-1 and deploy your agents with trust and visibility, knowing that they are being continuously monitored.
For example, our new SDK allows you to apply user roles and limit an agent's access to data, helping you to comply with Article A003 of AIUC-1. Check out the open-source SDK and configure it on your own agents, or read more about our launch here.
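As a sketch of what role-based, pre-action enforcement with auditable logs can look like, here is a minimal illustration in Python. The names below (`Guard`, `check`, the role and tool names) are our own for this example and are not the Handlebar SDK's actual API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Guard:
    # Maps each user role to the set of tools an agent may invoke on its behalf.
    allowed_tools: dict
    # Append-only record of every allow/deny decision, for auditability.
    audit_log: list = field(default_factory=list)

    def check(self, role: str, tool: str) -> bool:
        """Deterministically allow or deny a tool call before the agent acts,
        and record the decision with a timestamp."""
        allowed = tool in self.allowed_tools.get(role, set())
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "role": role,
            "tool": tool,
            "decision": "allow" if allowed else "deny",
        })
        return allowed

guard = Guard(allowed_tools={
    "support_agent": {"lookup_order", "send_reply"},
    "admin_agent": {"lookup_order", "send_reply", "issue_refund"},
})

assert guard.check("support_agent", "lookup_order") is True
assert guard.check("support_agent", "issue_refund") is False  # denied: outside role
```

The key property is that the rule runs before the action, not after: the agent never reaches a tool its role does not permit, and every decision (allowed or denied) leaves an audit trail.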
Footnotes

2. Backers of AIUC: Fortune, “Exclusive: Who covers the damage when an AI agent goes rogue?”
3. Percentage of work decisions made by agentic AI: Gartner, “Gartner Predicts Over 40% of Agentic AI Projects Will Be Canceled by End of 2027”
4. Risky behaviour of agents: SailPoint report, “AI agents: The new attack surface”
5. Pieces of legislation that AIUC-1 is based on: https://aiuc-1.com/crosswalks
