Currently, we’re all experiencing the impact of AI. Some say, without hint of hyperbole, that the impact of AI will be so drastic that life will never be the same.
However, even the release of the now ubiquitous ChatGPT (2022), processing more than a billion queries daily, isn’t likely to be a flashbulb memory. We don’t hear people
say, “I’m old enough to remember when AI changed the world,” because we’re in the middle of that change.
Different people seem to focus on different aspects of AI. Some people get fixated, with a mix of fascination and horror, on the AI of self-driving automobiles. Others direct the conversation toward LLMs like ChatGPT or Gemini, or Claude. There are others who can’t stop thinking about (and sharing!) the incessant deluge of AI slop images and videos in social media (e.g., “Meet Cabbage Hulk”).
The content can be simultaneously corny, visually arresting, confusing and geopolitically dangerous.
I’m old enough to remember the launch of mobile check-in, my first “boutique” hotel stay, and my first 360-degree online walkthrough to select a holiday destination. Like these memories, the examples in the above paragraph show how AI is touching our lives.
Meanwhile, AI is altering the cybersecurity threat landscape. When it comes to hospitality and data protection compliance, I’m old enough to remember the releases of PCI DSS (2004) and GDPR (2018). I’ve personally witnessed the head-scratching and handwringing that accompanied the swirling confusion around what compliance means and how to achieve it. To address the challenge, companies invest heavily. PCI DSS compliance, yearly, is a $10 billion industry. Add $5 billion to $8 billion invested in GDPR compliance, and that’s a sizeable figure.
At 250 times the yearly investment in PCI DSS and GDPR, $2 trillion to $4 trillion represents the estimated value that AI’s LLMs are expected to bring to travel. That is serious money and warrants serious attention; travel technology leaders, hoteliers and providers must focus on protecting this treasure trove of potential that AI systems give to their organizations and the industry. Organizations enjoy the enhanced productivity that AI systems deliver. But it’s often not monitored for accuracy or misuse. Perhaps even more concerning are the incidences of changes to baseline configuration (e.g., patches, firewall rules, permissions, etc.) that are neither intentional nor sanctioned.
According to estimates from Stanford’s 2025 AI Index Report less than 10% of small organizations adequately monitor their AI systems2. Put another way, 90% of small businesses have no visibility, post-deployment, into whether previously hardened configurations remain intact.
We’ve seen this movie before. The plot is simple. First, innovations are introduced. Then, organizations play fast and loose. They push forward with semi-tested technologies on semi-reluctant guests because they believe it to be necessary to remain competitive.
Currently, 56% of technical leaders feel the pressure to rapidly deploy AI-enabled systems for automation, communications and content creation.
Then things get dicey. Bad actors launch Do Anything Now (DAN)-style prompts to instruct AI models to simulate an unfiltered personality.
This opens the door to trouble for organizations that deploy LLM powered chatbots. The irony is this: What was meant to enhance guest communications with 24/7, localized and on-brand interactions, does quite the opposite when compromised. And this can happen at greater speed and scale.
Next, things get dangerous. In May 2025, Cornell University published a study on Dark LLMs3. The research shows how AI amplifies the risk of breach via models devoid of safety and security controls or by way of “jailbroken” systems. Such systems are modified to bypass safety, security and ethical constraints. From 2023 to 2024, there was a 56.4% increase in AI-related data incidents. Does anyone think that the rupture rate will decrease in 2025?
Finally, calls for self-regulation are made. A noteworthy example is NIST’s voluntary AI framework. Another is the EU AI Code of Practice.
This voluntary framework for developers of General Purpose AI (GPAI) models is intended to promote commitments to transparency, copyrights and safety and security. Such well-meaning appeals to the better angels of our nature fall short – they always do. Even though OpenAI and Anthropic (as well as many others) are on the list of EU Code signatories, Meta refused. They called it overreach and said it stymied potential.
The familiar story concludes. Standards, requirements and legislative regulations are proposed. They get enacted and become law. Enforcement happens. Innovation becomes codification. As with most security challenges, the concerns with AI systems involve unstructured data. The largely structured data of a cardholder data environment (CDE) for PCI compliance is hard enough to secure. Attempting to limit the processing of unstructured data like emails that have primary account numbers (PAN) and other personally identifiable information (PII) makes it harder. Those unstructured goodies are tasty morsels for the bad guys.
I asked ChatGPT what we should do. It suggested the following.
Organizations should focus on:
- Implementing AI-powered data classification tools to better identify sensitive information in unstructured data
- Establishing proper access controls and data governance frameworks
- Conducting regular security assessments of AI systems
- Ensuring compliance with evolving privacy regulations like GDPR and emerging AI-specific laws
But what are the data protection best practices of yesteryear?
- Know your data and where they are
- Control access to your data
- Monitor your data
- Encrypt your data
Three-fourths of ChatGPT bullets are identical to the suggestions of traditional cybersecurity experts. Apparently, thoughtful, purposeful and strategic data planning is still smart business.
The impact of AI hasn’t changed that. I’m old enough to remember that the more things change, the more they stay the same.