Imagine a world where you don’t browse Amazon or Google Flights. Instead, you simply say to your phone, “Book me a trip to Tokyo for under $2,000, including a hotel near the Shinjuku district.” In seconds, your AI agent compares options, negotiates prices, books the tickets, and pays for them.
This is the promise of Agentic Commerce—a shift from humans using tools to humans delegating decisions.
However, a recent McKinsey analysis, “Navigating Trust and Risk,” highlights a critical bottleneck in this revolution. The technology for autonomous agents is arriving fast, but the trust required to let them spend our money is lagging behind.
For agentic commerce to scale from a novelty to a global economic force, Responsible AI cannot be an afterthought. It must be the foundational infrastructure. Here is why the future of commerce depends on solving the “Trust Equation.”
The New Trust Equation: Who Are We Actually Trusting?
In traditional commerce, the trust equation is linear. When you walk into a store or visit a website, you ask: Do I trust this brand? Do I trust this merchant?
In agentic commerce, that equation becomes abstract and multi-layered. When an AI agent shops on your behalf, you are no longer making the choices. This prompts a profound question: Who do we trust when we aren’t the ones clicking “Buy”?
The stark reality: for many consumers, the answer right now is “no one.”
Consider markets like Germany and Japan, where a cultural preference for control and transparency keeps cash and invoice-based payments ahead of credit cards. If a consumer hesitates to enter a credit card number on a static, verifiable website, how likely are they to hand over their wallet and decision-making power to an opaque AI bot?
The “Leap of Faith” Problem
Adoption doesn’t follow innovation; it follows comfort. As Roger Roberts, a McKinsey partner, notes, trust is deeply contextual: what feels intuitive in Silicon Valley might be unthinkable in São Paulo. To bridge this gap, technologists cannot rely on legal disclaimers. They must build agents that engage in ongoing dialogue, allowing users to define boundaries by asking, “How is my data being used?” and “Why did you make this choice?”
The Three Pillars of Risk in an Agentic World
If trust is the engine of agentic commerce, risk is the brake. Traditional compliance frameworks are ill-equipped to handle agents that operate autonomously across borders and systems. The analysis identifies three specific areas where risk is evolving:
1. Systemic Risk: The Snowball Effect
An AI agent is more than an interface; it is a decision-maker. When you connect millions of decision-makers, you introduce Systemic Risk.
Imagine a single faulty prompt or a hallucination in a travel booking agent. In isolation, it books one wrong flight. But at scale, interconnected agents could trigger a cascade of unintended consequences—over-ordering inventory, crashing a reservation system, or executing thousands of purchases without consent.
The Fix: We need “circuit breakers.” Agents must be designed with resilience in mind. Can they fail gracefully? Can they backtrack? Reversing a bad AI decision is far more complex than returning a pair of shoes.
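As a minimal sketch of what such a circuit breaker could look like (every name and threshold below is hypothetical, not from the McKinsey analysis), consider a purchasing agent whose actions are gated by a daily budget and a rate limit:

```python
import time
from dataclasses import dataclass, field

# A minimal, hypothetical "circuit breaker" for a purchasing agent:
# it trips on runaway action rates or budget overruns, forcing the
# agent to fail gracefully and escalate to a human.

@dataclass
class SpendCircuitBreaker:
    max_spend_per_day: float       # hard ceiling on daily spend
    max_actions_per_minute: int    # rate limit to stop runaway loops
    spent_today: float = 0.0
    recent_actions: list = field(default_factory=list)

    def allow(self, amount: float, now: float) -> bool:
        # Keep only actions from the last 60 seconds in the rate window.
        self.recent_actions = [t for t in self.recent_actions if now - t < 60]
        if len(self.recent_actions) >= self.max_actions_per_minute:
            return False  # trip: the agent is acting suspiciously fast
        if self.spent_today + amount > self.max_spend_per_day:
            return False  # trip: purchase would blow the daily budget
        return True

    def record(self, amount: float, now: float) -> None:
        self.spent_today += amount
        self.recent_actions.append(now)

breaker = SpendCircuitBreaker(max_spend_per_day=2000.0, max_actions_per_minute=5)
if breaker.allow(450.0, time.time()):
    breaker.record(450.0, time.time())  # proceed with the booking
else:
    print("Tripped: escalate to the human for explicit approval")
```

The design choice worth noting is that the breaker fails closed: when in doubt, the agent stops and asks, rather than improvising with someone else’s money.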
2. The Accountability Vacuum
We are entering a legal gray zone. If an AI agent books a non-refundable trip that gets canceled, or buys a product that causes harm, who is responsible?
- The platform that built the model?
- The brand that deployed the agent?
- The user who authorized it?
Currently, there is no global consensus. While the EU’s AI Act is beginning to provide clarity, we are largely navigating a liability vacuum. Until frameworks like KYA (Know Your Agent) become standard, businesses may need to over-disclose and limit autonomy to protect themselves from reputational damage.
3. Data Sovereignty and Geopolitics
Agents run on data, and data has borders.
- If a US-based agent processes the personal preferences of a French citizen, is it GDPR compliant?
- If an agent is trained on global data but acts locally in India, does it violate data localization laws?
We are seeing a trend toward “AI Sovereignty.” Countries are drawing firm lines regarding where data lives and how it is used. The future of agentic commerce may not be a single global brain, but a network of localized models (e.g., OpenAI’s “for countries” approach) that respect regional laws and cultural norms.
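To make that idea concrete, here is a rough sketch (all endpoints and region codes are invented for illustration) of an agent platform that routes each request to a model deployed inside the user’s own jurisdiction, and fails closed when no compliant deployment exists:

```python
# A rough, hypothetical sketch of data-residency routing: each request
# is served by a model hosted in the user's legal jurisdiction, so
# personal data never leaves the region that governs it.

REGIONAL_MODELS = {
    "EU": "https://eu.models.example.com/v1",  # GDPR: data stays in the EU
    "IN": "https://in.models.example.com/v1",  # Indian data-localization rules
    "US": "https://us.models.example.com/v1",
}

def route_request(user_region: str) -> str:
    """Return the in-region endpoint, or fail closed if none exists."""
    endpoint = REGIONAL_MODELS.get(user_region)
    if endpoint is None:
        # Fail closed: refuse the request rather than silently sending
        # personal data across a border it may not legally cross.
        raise ValueError(f"No compliant deployment for region {user_region!r}")
    return endpoint

print(route_request("EU"))  # -> https://eu.models.example.com/v1
```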
The TRiSM Stack: Building the Architecture of Trust
To solve these problems, organizations must move beyond “Trust me, I’m an AI.” They need to implement what the industry calls the TRiSM Stack (Trust, Risk, and Security Management).
This involves five dimensions of trust that must be engineered into the agent, as the sketch after this list illustrates:
- Know Your Agent (KYA): Verifying the identity of the agent, similar to KYC in banking.
- Human-Centricity: Ensuring the agent can explain its reasoning and allows for human override.
- Transparency: Clear disclosure of how decisions are made.
- Security: End-to-end encryption and data minimization.
- Governance: Defined accountability for errors and adherence to regulations.
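Here is a minimal sketch of how those five dimensions might be made machine-checkable (every name and threshold below is hypothetical). The point is that trust properties become testable preconditions on each agent action rather than marketing claims:

```python
from dataclasses import dataclass

# A minimal, illustrative encoding of the five TRiSM dimensions as a
# policy that every agent action is validated against before execution.
# All names, fields, and thresholds are hypothetical.

@dataclass(frozen=True)
class AgentAction:
    agent_id: str            # KYA: verified identity of the acting agent
    rationale: str           # Transparency: human-readable reason for the action
    override_enabled: bool   # Human-centricity: a person can veto this step
    data_fields: tuple       # Security: only the minimum data required
    owner: str               # Governance: who is accountable if this goes wrong

VERIFIED_AGENTS = {"travel-agent-001"}  # registry populated by a KYA process

def validate(action: AgentAction) -> list[str]:
    """Return the TRiSM violations for an action; an empty list means proceed."""
    violations = []
    if action.agent_id not in VERIFIED_AGENTS:
        violations.append("KYA: agent identity is not verified")
    if not action.override_enabled:
        violations.append("Human-centricity: no human override path")
    if not action.rationale:
        violations.append("Transparency: no explanation recorded")
    if len(action.data_fields) > 3:  # arbitrary illustrative threshold
        violations.append("Security: requesting more data than needed")
    if not action.owner:
        violations.append("Governance: no accountable owner assigned")
    return violations

# Example: a fully compliant action passes all five checks.
action = AgentAction("travel-agent-001", "Cheapest nonstop flight under budget",
                     override_enabled=True, data_fields=("dates", "budget"),
                     owner="acme-travel-team")
assert validate(action) == []
```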
Conclusion: Risk is the Companion of Opportunity
The rise of agentic commerce introduces a new kind of risk: The Risk of the Unknown. As agents learn to improvise and chain actions together, they will exhibit emergent behaviors that we cannot fully predict today.
But risk is not the opposite of opportunity. It is the price of admission.
The winners in this new era will not be the companies that build the smartest agents, but the ones that build the safest agents. By prioritizing Responsible AI, transparency, and consumer control, businesses can turn trust from a bottleneck into their ultimate competitive advantage.
In the end, we won’t adopt agentic commerce because it is faster or cheaper. We will adopt it only when we trust the machine as much as we trust ourselves.