Notes by Rajeev Goswami

Insights on AI, Business Travel & Leadership

Personal AI Agents: Convenience or New Security Nightmare?


Personal AI agents are rapidly moving from experimentation to everyday reality. They promise to handle routine digital tasks—shopping, travel bookings, expense filing, and entertainment reservations—on our behalf. For travelers and travel managers alike, this could remove enormous friction from planning and managing trips.

What feels different this time is not just smarter software, but software that can act—often without waiting for us to intervene.

But as these agents gain autonomy, they also introduce new risks around security, identity, and legal accountability—risks the travel industry must address sooner rather than later.

The deeper question isn’t whether personal AI agents can book travel, but how much autonomy we are comfortable delegating in AI-driven business travel decisions.


The Promise of Personal AI Agents in Travel

Personal AI agents are designed to process information, make decisions, and take actions with minimal human involvement. In travel, this can mean:

  • Comparing fares and hotel options
  • Booking policy-compliant trips
  • Managing loyalty accounts and preferences
  • Arranging ground transport
  • Monitoring disruptions and rebooking automatically
  • Submitting expenses after the trip

For corporate travelers, this translates into less time spent navigating portals and policies. For travel managers, it offers more consistent compliance and better data. For TMCs and suppliers, well-designed agents can improve conversion by presenting the right option at the right time and reducing abandoned bookings.

Anyone who has tried to rebook a disrupted trip late at night will immediately see the appeal.

The benefits are most visible in real-world travel scenarios, where itineraries change, exceptions arise, and decisions must be revisited carefully rather than executed blindly.

In short, personal agents could finally make “managed travel” feel effortless for the end user.


Security and Identity Risks: When Convenience Becomes Exposure

The very capabilities that make AI agents powerful also magnify traditional cyber risks.

A personal agent may hold access to:

  • Email and calendars
  • Travel profiles and loyalty accounts
  • Corporate booking tools
  • Payment methods and virtual cards

If compromised, a single agent could trigger widespread fraud, unauthorized bookings, data leaks, or even impersonation at scale.

Trust breaks down quickly when systems are allowed to act without strong controls—especially once payments and financial authorizations are involved.

In a travel context, that risk is amplified because bookings, changes, and payments often occur under time pressure.

Security experts already warn that advanced AI can convincingly mimic human voice and text. Combined with autonomous agents, this creates new opportunities for identity theft, social engineering, and account takeovers—especially in travel, where urgency and disruption are common.

When an AI agent acts independently and causes harm, a difficult question arises: who is responsible—the traveler, the employer, the agent provider, or the platform that accepted the transaction?


Legal and Ethical Gray Areas

Today’s laws were written for humans and traditional software, not autonomous agents.

Courts are increasingly applying existing computer-fraud and unauthorized-access laws to AI agents that interact with websites in ways that violate terms of service. This raises critical questions:

  • When does automated access become illegal?
  • What constitutes valid consent when an agent acts on someone’s behalf?
  • How should personal and behavioral data be protected when agents operate continuously and at scale?

For travel managers, this is not an abstract legal debate—it directly affects how booking tools, aggregators, and agents can safely interact.

Ethically, opaque “black box” agents also create risks of bias, manipulation, and undisclosed commercial influence. In travel, rankings and recommendations shape spending, safety decisions, and access to fares.

If an agent nudges a traveler toward one option over another, it matters who benefits—and whether the traveler even knows that choice is being made.


Why the Travel Industry Needs “Agent Access Standards”

The recent Amazon–Perplexity dispute highlights growing discomfort among platforms with AI agents that behave like humans without clearly identifying themselves. Many observers see this as one of the first major legal stress tests for agentic AI in commerce.

Travel is unlikely to be far behind. Relying on lawsuits alone is not a sustainable path forward.

What’s needed is an industry-wide “Agent Access Standard”—a shared framework that defines how legitimate AI agents can safely and transparently interact with travel systems.

Such a standard could include:

  • Verified agent identities with clear delegation from users or corporations
  • Defined scopes of access, distinguishing between browsing, quoting, booking, modifying, or cancelling
  • Rate limits and behavioral rules to prevent abuse or scraping
  • Strong logging and audit trails for compliance, dispute resolution, and security monitoring
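To make the four components above concrete, here is a minimal sketch of what a single agent-access grant might encode—verified identity, delegated authority, action scopes, a rate limit, and an audit trail. The field names and scope labels are assumptions for illustration; no such standard exists yet.

```python
# Illustrative sketch of an "Agent Access Standard" grant.
# Field names and scope labels are assumptions, not an existing specification.
from dataclasses import dataclass, field
from datetime import datetime, timezone

SCOPES = {"browse", "quote", "book", "modify", "cancel"}

@dataclass
class AgentGrant:
    agent_id: str          # verified identity of the agent
    delegated_by: str      # user or corporation granting authority
    scopes: set[str]       # which actions the agent may take
    max_requests_per_min: int = 60
    audit_log: list[dict] = field(default_factory=list)

    def authorize(self, action: str) -> bool:
        """Check an action against the granted scopes and log the attempt."""
        allowed = action in self.scopes
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "allowed": allowed,
        })
        return allowed

grant = AgentGrant("agent-123", "acme-corp", scopes={"browse", "quote"})
grant.authorize("quote")  # True: within delegated scope
grant.authorize("book")   # False: booking authority was never delegated
```

Note that even denied actions are logged: the audit trail exists precisely so that disputes about what an agent attempted can be resolved after the fact.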

For travel managers and TMCs, this is less about control and more about trust.


The Role of Governments and Regulators

Regulators will need to clarify accountability when AI agents cause financial, security, or privacy harm. Someone must remain responsible—whether that is the employer, the agent provider, or another party in the ecosystem.

Governments can also play a constructive role by:

  • Recognizing certified agent standards
  • Requiring disclosure when users interact with agents rather than humans
  • Setting minimum safeguards for delegated financial and identity-sensitive actions

The goal should not be to slow innovation, but to make it safe enough to scale.

Done well, this combination of industry standards and targeted regulation can preserve innovation while preventing rogue automation and systemic abuse.


Final Thought

Personal AI agents may soon handle more of our travel lives than any booking tool or website ever has.

The question is not whether this will happen, but whether the travel industry helps shape it responsibly.

We can let agentic travel evolve amid chaos and litigation, or we can define standards that make automation as trusted, secure, and transparent as it is powerful. It is time for industry bodies such as IATA and GBTA to take the initiative.

I explore related ideas around automation, control, and decision-making across business travel in my writing on AI in Business Travel.


References 

  • John Koetsier, “Amazon Vs. Perplexity: Welcome To The Battle For The Future Of Commerce,” Forbes, November 2025.
  • Reuters, “AI agents: greater capabilities and enhanced risks,” April 22, 2025.
  • Samsung SDS, “In the Era of Agentic AI, What Are the Evolving Security Threats?,” November 2025.
  • Mindgard, “Top 10 AI Security Risks (and How to Protect Your Systems),” October 2025.
  • Netwoven, “State of AI Identity Threats 2025: How Generative AI Is Reshaping Identity Security,” November 2025.
  • Browserless, “Is Web Scraping Legal in 2025? Laws, Ethics, and Risks Explained,” August 2025.
  • ActiveFence, “Key Security Risks Posed by Agentic AI and How to Mitigate Them,” 2025.
  • Digital Commerce 360, “Amazon–Perplexity clash exposes AI fault line in B2B ecommerce,” November 2025.
