What is agentic AI?
Agentic AI refers to artificial intelligence systems that can autonomously plan, decide and execute multi-step tasks to achieve defined goals, adapting to new information with minimal human oversight or intervention. Unlike traditional automation or static generative AI chatbots, AI agents can make decisions and orchestrate workflows across multiple steps and multiple systems.
Some use cases we are seeing emerge include:
- autonomous customer support and case resolution agents: where agents are deployed to field initial queries and take follow-up actions, such as retrieving account information, escalating issues or updating systems;
- agentic shopping and agentic payments: where agents interact directly with retail platforms on behalf of users to browse, select, and check out purchases in accordance with a user's intent, bypassing the need for humans to visit e-commerce shopfronts entirely; and
- cybersecurity monitoring: where agents monitor environments, detect anomalies and initiate responses without human intervention.
While the opportunities are game-changing, there are heightened and distinct risks in using agentic AI from a legal perspective. One of the key emerging issues is accountability for the agent's actions, particularly where agents are granted system-level permissions or delegated account authorities in order to orchestrate and execute workflows.
Whose agent is it anyway?
One of the key legal issues with using agentic AI tools is accountability.
As the level of human involvement decreases, questions arise as to who is legally responsible for the decisions and actions taken by an autonomous agentic AI tool, especially when the agent makes an error. For example, is it:
- the developer of the agentic AI tool;
- the company that has deployed it; or
- the user who has allowed the agentic AI to be used to achieve certain goals?
Australian laws are built around the idea of human agency (or, at least, imputed human agency). This is the idea that:
- legal persons (whether a natural person or an entity with legal personhood, such as a corporation) are responsible for their actions and omissions; and
- in addition, legal persons can, as principal, be bound by the acts of their agents where those agents are acting within their scope of authority.
Against that backdrop, questions arise where the AI agent operates outside of that scope of authority. It becomes even more complicated in the context of multi-agent systems, where there may be a "lead" agent interacting with a number of sub-agents in order to make decisions and orchestrate workflows.
Ultimately, whether an agentic AI's acts can be imputed to a principal will depend on the specific facts.
What this means for business - user beware
Ultimately, if an agentic system executes an action, the courts will look to the persons who developed, deployed or used the agentic AI tool when seeking to apportion liability.
Liability risk, in turn, is likely to be allocated by contract as between companies (in a B2B context) and consumers (in a B2C context) - most likely by non-negotiable terms of use. The question will therefore, almost certainly, turn on an analysis of the contractual relationships between three key players: the developer, the deployer and the customer.
Before adopting agentic AI, organisations should consider (among many other things):
- who is ultimately bound by the actions of the AI agent? How is this dealt with in contracts with the developer of the tool, the company using the tool and any downstream customers (whether other businesses or consumers)?
- what risk mitigation measures should be put in place from a practical perspective - for example, are there technical rules and guardrails that can be put in place so that an AI agent cannot act outside the scope of its appointment or authority or interact with systems or platforms that have not been pre-approved?
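To make the second point concrete, the kind of technical guardrail contemplated above can be as simple as a pre-execution check that refuses any action or target system outside an explicitly granted scope. The sketch below is illustrative only: the `AgentGuardrail` class, action names and system names are hypothetical, not drawn from any real product or framework.

```python
# Illustrative sketch: a permission guardrail applied before an agent
# executes a tool call. All names here (AgentGuardrail, action and
# system identifiers) are hypothetical examples, not a real API.

class ScopeExceededError(Exception):
    """Raised when an agent attempts an action outside its authority."""

class AgentGuardrail:
    def __init__(self, allowed_actions, approved_systems):
        # The deployer defines, in advance, what the agent may do and
        # which systems it may touch - its "scope of authority".
        self.allowed_actions = set(allowed_actions)
        self.approved_systems = set(approved_systems)

    def check(self, action, target_system):
        # Refuse any action not explicitly within the agent's authority.
        if action not in self.allowed_actions:
            raise ScopeExceededError(f"Action not authorised: {action}")
        # Refuse interaction with systems not pre-approved by the deployer.
        if target_system not in self.approved_systems:
            raise ScopeExceededError(f"System not pre-approved: {target_system}")

# Example: a customer-support agent may retrieve accounts and escalate
# issues, but only against pre-approved internal systems.
guardrail = AgentGuardrail(
    allowed_actions={"retrieve_account", "escalate_issue"},
    approved_systems={"crm", "ticketing"},
)

guardrail.check("retrieve_account", "crm")  # within scope: permitted
try:
    guardrail.check("issue_refund", "payments")  # outside scope: blocked
except ScopeExceededError as exc:
    blocked_reason = str(exc)
```

Checks of this kind do not resolve the legal question of imputed authority, but they narrow the factual circumstances in which an agent can act outside its appointment in the first place.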
Even then, not all risk can be effectively dealt with by way of contract and practical safeguards - difficult questions of liability will still arise in areas such as the consumer context (in particular, unfair contract terms), negligence, and other areas of law that govern commercial and consumer dealings and that parties cannot neatly contract out of.
All information on this site is of a general nature only and is not intended to be relied upon as, nor to be a substitute for, specific legal professional advice. No responsibility for the loss occasioned to any person acting on or refraining from action as a result of any material published can be accepted.