Target and other major retailers are updating terms of service to hold consumers liable for AI shopping errors, setting a controversial precedent for the future of autonomous commerce and AI agents.
In a move that highlights the growing legal complexities of autonomous commerce, retail giant Target has updated its terms of service to shift the financial liability for artificial intelligence shopping errors directly onto consumers. As of April 2026, customers utilizing Target’s upcoming Gemini-powered AI shopping assistant will be held financially responsible for any hallucinations, incorrect substitutions, or unauthorized purchases made by the bot on their behalf. This policy sets a controversial precedent for how major retailers plan to handle the unpredictable nature of agentic AI, forcing consumers to foot the bill for experimental technology.
The shift represents a critical turning point in commercial AI deployment. Retailers are aggressively moving beyond passive recommendation algorithms into the era of agentic AI—systems capable of independently executing multi-step tasks like adding items to a cart, authorizing payments, and finalizing checkouts. Yet, as these systems gain autonomy, corporations are preemptively shielding themselves from the very real risk of algorithmic misfires.
The Fine Print of Agentic Commerce
Target’s updated terms specifically address its forthcoming "Agentic Commerce Agent," a virtual shopping assistant built on Google’s Gemini large language model (LLM). According to the new legal language, any transaction initiated by the AI on a user's behalf will be legally treated as if the user clicked the "buy" button themselves.
"You are responsible for reviewing activity performed by your Agentic Commerce Agent and for promptly notifying the Agentic Commerce Agent and Target of any activity you believe is unauthorized or outside the scope of permissions you approved," the new terms state.
The company goes further to explicitly disclaim the reliability of its own tool, warning users that "Target does not purport to guarantee that an Agentic Commerce Agent will act exactly as you intend in all circumstances."
In practice, this means if a customer asks the AI to "buy a replacement charging cable" and the system hallucinates, erroneously purchasing a $150 premium electronic accessory instead of a $15 generic cable, the customer is liable for the charge. While a Target spokesperson confirmed to industry reporters that standard return policies still apply, the friction, temporary loss of funds, and burden of proof remain entirely on the consumer.
A Broader Industry Trend
Target is not an outlier in this legal maneuvering. The move mirrors similar quiet updates across the retail sector as companies rush to deploy autonomous features without absorbing the associated financial risks.
Walmart recently updated the terms of use for its own AI shopping assistant, Sparky, to account for generative AI mistakes. The company noted that responses and recommendations generated by the AI may not be accurate, laying the groundwork to distance the corporation from automated purchasing errors.
This trend exposes a fundamental technical tension: retailers are embedding probabilistic systems (LLMs) into deterministic business processes (pricing, inventory, and payments). Industry analysts note that a natural-language prompt like "get the usual snacks for the kids" contains massive inherent ambiguity. When an AI attempts to resolve edge cases regarding dietary restrictions, out-of-stock substitutions, or regional tax rules, it can easily generate an outcome that feels "authorized" by the system's logic but was never intended by the shopper.
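One common way to reconcile a probabilistic agent with deterministic payment rails is a human-in-the-loop guardrail that pauses ambiguous or expensive proposals before checkout. The sketch below is purely illustrative: the spending cap, the lexical-overlap check, and every name in it are hypothetical assumptions, not a description of Target's or Walmart's actual systems.

```python
# Illustrative sketch of a deterministic guardrail wrapped around a
# probabilistic shopping agent. All names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class ProposedPurchase:
    item: str            # what the agent wants to buy
    price_usd: float     # proposed charge
    user_intent: str     # the original natural-language request

def requires_confirmation(p: ProposedPurchase, cap_usd: float = 25.0) -> bool:
    """Return True when the agent's proposal should pause for human review.

    Two simple deterministic checks stand in for a fuller policy layer:
    a hard spending cap, and a crude lexical test that flags proposals
    sharing no words with the user's request.
    """
    if p.price_usd > cap_usd:
        return True
    intent_words = set(p.user_intent.lower().split())
    item_words = set(p.item.lower().split())
    return intent_words.isdisjoint(item_words)

# The article's scenario: a $150 accessory proposed for a $15-cable request
flagged = requires_confirmation(
    ProposedPurchase("premium wireless charging dock", 150.0,
                     "buy a replacement charging cable"))
# A close match under the cap passes through without review
ok = requires_confirmation(
    ProposedPurchase("usb-c charging cable", 12.99,
                     "buy a replacement charging cable"))
print(flagged, ok)  # True False
```

Checks like these cannot eliminate hallucinations, but they convert an open-ended liability into a bounded one, which is precisely the allocation question the new terms of service leave to the consumer.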
The Legal Framework Permitting AI Liability Shifts
Retailers are emboldened to make these changes because existing US contract law provides a surprisingly robust foundation for AI-initiated agreements.
Under the Uniform Electronic Transactions Act (UETA) and the federal E-SIGN Act, contracts formed by "electronic agents" are legally enforceable. As legal experts at Stellagent point out, UETA explicitly states that a contract can be formed "even if no individual was aware of or reviewed the electronic agents' actions."
However, while the law recognizes the validity of an AI's purchase, the allocation of liability when that AI makes a mistake remains a massive gray area. In February 2026, major law firms including Clifford Chance and Torys issued warnings to corporate counsel that the liability gap created by agentic AI is not adequately covered by legacy vendor contracts or consumer protection laws.
While many organizations have successfully deployed AI agents for enterprise customer support automation to handle inquiries, route tickets, and resolve basic disputes, granting these systems the autonomy to execute financial transactions introduces an entirely new risk calculus. In enterprise support, a hallucination might result in a frustrated customer interacting with a chatbot; in agentic commerce, a hallucination directly impacts a user's bank account.
What to Watch Next: The Regulatory Collision Course
The US approach of allowing retailers to dictate AI liability through terms of service updates is likely to face severe regulatory stress-testing, particularly when contrasted with international frameworks.
The 2024 European Union Product Liability Directive reform officially classified AI software as a "product." Under this framework, strict liability is extended to defects arising from a system's post-market self-learning or autonomous actions. If an EU consumer faces financial harm due to an AI shopping agent's hallucination, the retailer and the software provider (such as Google) could face strict liability, regardless of what their terms of service claim.
As agentic commerce scales through the remainder of 2026, consumer protection advocates are expected to push the Federal Trade Commission (FTC) to intervene. Until legal precedents are firmly established in the courts, consumers using AI shopping assistants are operating in a "buyer beware" environment where the convenience of automation carries the hidden cost of upfront financial liability.
Last reviewed: April 7, 2026