
Digital commerce redefined: The growing impact of agentic commerce

2026 is poised to be a pivotal year for agentic commerce – the evolution from e-commerce to "a-commerce", in which autonomous AI agents conduct transactions with minimal human input. The shift marks a technological and legal turning point: AI-driven commerce is reshaping competition in the digital economy and raising novel practical and legal challenges. McKinsey forecasts that by 2030, US B2C retail could see up to $1 trillion in orchestrated revenue from agentic commerce, with global projections reaching up to $5 trillion.

From human-initiated to autonomous transactions

Unlike traditional human-led e-commerce, agentic commerce introduces AI intermediaries that understand context, interpret intent, and act within defined boundaries. This evolution involves three levels of increasing autonomy: agent-to-site (agents interact directly with merchant platforms), agent-to-agent (agents transact autonomously with other agents), and multi-agent to multi-platform (intermediary systems facilitate interactions across many agents and platforms).

Agentic systems scale rapidly because they leverage existing e-commerce infrastructure rather than building new rails. Leading AI developers including OpenAI, Google, Microsoft and Perplexity are rolling out agent capabilities that search, compare, and complete purchases on behalf of consumers. Meanwhile, payment networks like Visa's Intelligent Commerce and Mastercard's Agent Pay provide secure payment infrastructure, with tokenisation, agent verification, consent management, and fraud prevention.

Technical infrastructure is developing rapidly through a set of emerging protocols:

  • Anthropic's Model Context Protocol (MCP), which connects agents to tools;
  • the Agent-to-Agent Protocol (A2A), which enables interoperability between agents;
  • the Agentic Commerce Protocol (ACP), which enables the buyer-agent-merchant interaction; and
  • Google's Agent Payments Protocol (AP2) – an open-source standard supporting credit cards, bank transfers, and even stablecoins.
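
What these protocols have in common is that they turn an agent's intent into structured, machine-readable requests against tools a platform exposes. The toy Python sketch below illustrates that tool-calling pattern in the abstract; every name in it (ToolCall, MerchantTools, find_product, place_order) is hypothetical, and none of it reflects the actual MCP, A2A, ACP or AP2 APIs.

```python
# Toy illustration of the tool-calling pattern that protocols such as
# MCP and ACP formalise. All names here (MerchantTools, find_product,
# place_order) are hypothetical -- not the real protocol APIs.
from dataclasses import dataclass


@dataclass
class ToolCall:
    """A structured request from an agent to a merchant-exposed tool."""
    tool: str
    arguments: dict


class MerchantTools:
    """Stands in for a merchant platform exposing tools to agents."""

    def find_product(self, query: str, max_price: float) -> list[dict]:
        # In a real integration this would hit the merchant's catalogue API.
        return [{"sku": "SKU-123", "name": query, "price": 19.99}]

    def place_order(self, sku: str, quantity: int) -> dict:
        # A real implementation would require a verified payment mandate.
        return {"order_id": "ORD-1", "sku": sku, "quantity": quantity}


def dispatch(call: ToolCall, tools: MerchantTools) -> object:
    """Route a structured agent request to the named tool."""
    handler = getattr(tools, call.tool)
    return handler(**call.arguments)


# The agent expresses intent as structured, auditable tool calls rather
# than free-form clicks on a web page.
result = dispatch(ToolCall("find_product", {"query": "running shoes",
                                            "max_price": 50.0}),
                  MerchantTools())
print(result)
```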

Newfound systemic risks

Agents introduce systemic risks that traditional generative AI safeguards, designed primarily for isolated, LLM-centric use cases, were never built to handle. In multi-agent ecosystems, localised errors can amplify into systemic failures through miscoordination and collusion, potentially destabilising entire workflows. Key risks include:

  • Amplified AI 'black box' problems: such as lack of transparency, observability, traceability and accountability, leading to bias, misinformation and unfair or harmful outcomes.
  • Security vulnerabilities: bad actors can ‘hijack’ agents to instigate unintended or harmful actions; agents also expand the attack surface, complicating identity verification and raising fraud potential.
  • Novel malfunction risks: such as AI systems changing their own goals, corrupting their memory, gradually losing reasoning ability, gaining unauthorised access to resources, performing well on complex tasks while failing at simple ones, and fostering excessive human trust in automation.
  • Fragmented system access: when agents interact across multiple disconnected platforms, APIs and data silos without unified governance, blind spots emerge in observability and traceability.
  • Agent sprawl: as agentic AI scales across the enterprise, with more teams creating specialised agents for different functions, the proliferation of agents across systems and workflows can exacerbate all of the foregoing risks.

A single malfunction can cascade catastrophically. For example, one agent misinterpreting a promotional message could trigger multiple agents to place large orders expecting non-existent discounts. Combine that with an attacker embedding malicious instructions that expose API keys and redirect payments, and the losses could snowball into the millions before detection.
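
One way such a cascade can be bounded in practice is a hard spend cap enforced independently of the agents themselves, failing closed and escalating to a human once a rolling limit is hit. A minimal Python sketch, with all names and thresholds hypothetical:

```python
# Minimal sketch of a spend-cap circuit breaker that sits between agents
# and the payment rail. Names and thresholds are illustrative only.
from datetime import datetime, timedelta


class SpendCapExceeded(Exception):
    pass


class CircuitBreaker:
    """Blocks agent-initiated payments once a rolling spend cap is hit."""

    def __init__(self, cap: float, window: timedelta):
        self.cap = cap
        self.window = window
        self.orders: list[tuple[datetime, float]] = []

    def authorise(self, amount: float) -> None:
        now = datetime.now()
        # Drop orders that have aged out of the rolling window.
        self.orders = [(t, a) for t, a in self.orders
                       if now - t < self.window]
        spent = sum(a for _, a in self.orders)
        if spent + amount > self.cap:
            # Fail closed: escalate to a human instead of paying out.
            raise SpendCapExceeded(
                f"blocked: {spent + amount:.2f} exceeds cap {self.cap:.2f}")
        self.orders.append((now, amount))


breaker = CircuitBreaker(cap=1_000.0, window=timedelta(hours=1))
breaker.authorise(400.0)    # allowed
breaker.authorise(500.0)    # allowed
# breaker.authorise(200.0)  # would raise SpendCapExceeded
```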

Disrupting legal frameworks

Traditional regulatory frameworks built for human decision-making struggle with agentic AI autonomy. Critical and novel legal challenges include:

  • Authority to contract: Can AI agents form legally enforceable contracts? What safeguards ensure informed consent? How can merchants prove proper authorisation when an agent, rather than the cardholder personally, initiates payment? Solutions are emerging – Google's AP2 provides a technical framework of cryptographically signed "Mandates" that record what users authorised (see the sketch after this list).
  • Liability: Who is responsible when an AI agent misinterprets intent or acts outside its scope – making harmful decisions or erroneous purchases, "hallucinating" products, or failing to communicate terms? Product liability (which may involve strict liability for agent conduct), negligence, and breach of contract could all apply, creating a complex liability landscape for developers, deployers, and users.
  • Consumer and merchant protections: What rights exist to reverse agent-initiated transactions? How does agentic collaboration fit within competition law frameworks? How do cooling-off periods apply? How are merchants protected from fraudulent or erroneous agentic transactions? Could commercial terms be considered misleading or unfair when applied to AI agents rather than human shoppers?
  • Data privacy: Agents rely on deep behavioural profiling across multiple platforms. How do frameworks like GDPR and U.S. state laws like the California Consumer Privacy Act apply to AI agent processing activities? Establishing valid legal frameworks for agents to process personal data and make purchases without explicit human approval remains critical.
  • Transparency and explainability: As AI agents make decisions with real-world consequences, transparency requirements become more stringent. For example, the EU AI Act and California AI Transparency Act impose transparency obligations that may be difficult to meet within complex agentic systems.
  • Security and fraud prevention: AI agents require deep access to consumer data – financial profiles, transaction history, preferences, and payment information – which heightens exposure to data leaks, misuse, or exploitation. Liability allocation between consumers, merchants, and AI providers remains unsettled.
  • Payments regulation: Providers of AI agents need to be mindful of the regulatory perimeter when structuring their agentic payments products, and payment service providers must find a way to comply with payments rules on, for example, consent and customer authentication.
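
To make the "Mandate" idea concrete, the sketch below signs a canonicalised authorisation record with an Ed25519 key so a merchant can later verify what the user actually approved. It illustrates the general concept only – the field names and structure are hypothetical, not the actual AP2 wire format – and assumes the Python `cryptography` package.

```python
# Conceptual illustration of a signed authorisation "mandate" in the
# spirit of AP2. Field names and structure are hypothetical -- not the
# actual AP2 wire format. Requires the `cryptography` package.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)

# The user's device holds a signing key; merchants hold the public key.
user_key = Ed25519PrivateKey.generate()

mandate = {
    "user_id": "user-42",           # hypothetical identifiers
    "agent_id": "shopping-agent-7",
    "intent": "buy running shoes",
    "max_amount": 50.00,
    "currency": "GBP",
    "expires": "2026-01-31T00:00:00Z",
}

# Canonicalise before signing so verification is deterministic.
payload = json.dumps(mandate, sort_keys=True).encode()
signature = user_key.sign(payload)

# A merchant (or payment network) can later prove what the user
# actually authorised -- scope, amount, expiry -- before settling.
user_key.public_key().verify(signature, payload)  # raises if tampered
print("mandate verified: agent may spend up to", mandate["max_amount"])
```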

Regulatory vacuum: navigating uncharted territory

Existing AI-related laws, consumer protection statutes, and sector-specific regulations will apply to agentic AI systems, but coherent, agent-specific guidance has yet to emerge. Agentic systems therefore demand proactive approaches to accountability, control, and safety oversight.

The regulatory environment governing AI-driven transactions remains ambiguous. Organisations must assess how agent-related matters are managed within existing legal frameworks, while proactively preparing for forthcoming agent-focussed regulation.

The EU's AI Act represents the most ambitious regulatory response to AI, but it predates agentic commerce and lacks provisions for autonomous purchasing agents. Regulators are starting to address the gaps: EU antitrust regulators are increasingly attentive to risks of autonomous collusion in pricing and contractual arrangements, and the UK's Digital Regulation Co-operation Forum, a multi-regulator initiative, has launched a Call for Views on regulatory challenges specific to agentic AI.

The US has no federal law explicitly addressing agentic AI, though NIST is expected to revise its AI Risk Management Framework (a voluntary set of guidelines) to include considerations for agentic AI. There are some state-level touchpoints: the Colorado AI Act and the latest CCPA regulations, for example, provide a regulatory framework governing the adoption and integration of automated decision-making technologies. More generally, consumer protection laws prohibit unfair and deceptive practices toward consumers.

Internationally, the UNCITRAL Model Law on Automated Contracting provides a framework for recognising contracts formed by automated systems, but questions about enforceability and accountability in AI-driven contracts remain unresolved. Gaps persist regarding identity verification, liability allocation, and dispute resolution mechanisms.

Strategic imperatives and building agentic governance infrastructure

Consumer trust is the linchpin for agentic commerce adoption, with surveys showing most consumers remain wary of AI-led purchases, citing security and privacy concerns. Businesses must leverage experience from GenAI deployments to update risk management strategies. Key steps include:

  • Training data validation: Validate training data for biases, conduct regular audits, implement bias-correction techniques such as reweighting and adversarial debiasing, and align safeguards with relevant regulatory frameworks.
  • Contractual assignment of liability: Contractual arrangements with AI developers can allocate accountability for in-scope and out-of-scope agentic behaviour (and liability may also be shared).
  • Agentic AI policies: Establish formal policies with classification systems grouping agents by function, each with appropriate oversight. Clearly demarcate agent role and authority to contract, consent requirements, and liability for agent-driven decisions.
  • Governance frameworks: Set clear agent autonomy levels, decision boundaries, prohibited activities, behaviour monitoring, and audit mechanisms. Include thorough documentation and auditing of decision-making processes.
  • Data governance: Strengthen data governance to ensure compliance with privacy laws, implement robust data minimisation, anonymisation, and pseudonymisation practices, and conduct Data Protection Impact Assessments for higher-risk applications.
  • Explainability mechanisms: Establish robust explainability mechanisms to ensure agents’ actions can be understood, traced, and evaluated. Because agents can hallucinate or otherwise misstate their explanations, verification is key.
  • Updated contractual terms: Update contracts and templates to explicitly address agentic transactions and agent–human interaction disclosures, make clear the scope of authority granted to AI agents, and set out mechanisms for dispute resolution and allocation of liability.
  • Robust user consent architecture: Implement clear and conspicuous opt-out options with easy-to-understand explanations of the agent’s scope, real-time notifications and confirmations so users can validate a transaction before it is finalised, post-transaction grace periods, and comprehensive audit trails (a minimal sketch follows this list).
  • Continuous monitoring and auditing: Tailor monitoring to specific capabilities, use cases and real-world context – this may require multiple methodologies. Employ "Know Your Agent" protocols to authenticate and track agent actions, alongside standardised integrations to reduce fragmented access.
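
As a concrete illustration of the consent architecture flagged above, the following Python snippet gates finalisation of a purchase behind an explicit confirmation callback and records every step in an audit trail. All names are hypothetical.

```python
# Minimal sketch of a human-in-the-loop confirmation gate with an audit
# trail, as described above. All names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class AuditEvent:
    timestamp: datetime
    action: str
    detail: str


@dataclass
class ConsentGate:
    """Requires explicit user confirmation before finalising a purchase."""
    audit_trail: list[AuditEvent] = field(default_factory=list)

    def _log(self, action: str, detail: str) -> None:
        self.audit_trail.append(AuditEvent(datetime.now(), action, detail))

    def finalise(self, description: str, amount: float,
                 confirm: callable) -> bool:
        # Real-time notification: surface the pending transaction.
        self._log("proposed", f"{description} for {amount:.2f}")
        if not confirm(f"Approve '{description}' for {amount:.2f}?"):
            self._log("declined", description)
            return False
        self._log("approved", description)
        return True


gate = ConsentGate()
# In production `confirm` would be a push notification or UI prompt;
# here a stub auto-approves for demonstration.
gate.finalise("running shoes", 49.99, confirm=lambda msg: True)
for event in gate.audit_trail:
    print(event.timestamp.isoformat(), event.action, event.detail)
```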

Adapting to the agentic era

Agentic commerce is already reshaping the commercial landscape, challenging long-standing legal concepts of consent, authority, and consumer choice. For online retailers, the imperative is clear: businesses that move early – adopting governance frameworks, engaging with regulators, and adapting to AI-driven distribution – will be best positioned to thrive. 

Building trust through explainability, transparency, and human-in-the-loop safeguards for high-value or risk-prone transactions will be critical. Those who align legal, technical, and strategic capabilities now will be well placed to lead in the agentic era.

To stay up to date with the latest tech developments – subscribe now!

Tags

ai, consumer protection, fintech