The risk of shadow AI in customer support (and what to do about it)
If your organization hasn’t approved AI tools for your support agents, they’re probably taking matters into their own hands (and creating a security and compliance nightmare).
Has your organization approved specific AI models and invested in AI-powered software for your support team? If not, there’s a good chance your support agents are using AI anyway.
Although only 40% of companies in a 2025 MIT study had purchased an official large language model (LLM) subscription, employees from over 90% of those companies reported regularly using personal AI tools for work tasks. In other words, we’re seeing a rise in shadow AI: employees using unsanctioned, publicly available LLMs when their company doesn’t provide approved ones.
For support agents in particular, there’s a strong incentive to use AI for activities like writing responses to tickets or summarizing long ticket threads, even if their organization hasn’t provided approved tools. Speeding up these tasks helps agents reduce their cognitive load, resolve tickets faster, and maintain their SLAs. Unfortunately, shadow AI also puts your business at risk for reputational damage, compliance violations, and data breaches.
As a support leader, it’s important to understand how shadow AI can impact your organization and what you can do to get ahead of it. In the rest of this article, we’ll cover why your agents may be using their own AI tools, the potential consequences, and how you can work cross-functionally to bring secure, approved AI to your team.
What shadow AI usage looks like in support
Like many other knowledge workers, support agents are under pressure to do more with less. 77% of customer service agents say their workloads have increased and gotten more complex in the last year. And with research showing a 15-30% increase in productivity for support agents using AI, it’s unsurprising that many agents are turning to personal AI tools when their organization doesn’t provide approved models.
For some agents, shadow AI usage might include pasting the contents of a ticket into a public LLM like ChatGPT or Claude to generate a summary or recommended response. In other cases, agents may use plugins like Grammarly without even thinking about the fact that these tools are powered by public AI models.
Both scenarios are likely to raise red flags for security and compliance teams. The main concern is that support agents may paste sensitive information (e.g., customer PII, support ticket conversations, proprietary company information) into their chats with a public LLM. After an agent submits a prompt with sensitive data, that data is transferred to the public cloud that hosts the AI model. And while data is typically encrypted in transit, it must be decrypted for AI processing, raising the risk that sensitive information will be leaked, either accidentally or through a cyberattack.
This risk isn’t just theoretical: recent research shows that more than 3 out of 4 employees regularly paste corporate information into public AI tools, with almost 35% of inputs containing sensitive data.
The risks and potential consequences of shadow AI
Shadow AI can have a wide range of negative impacts on your business. The risks broadly fall into three categories: misinformation in AI outputs, loss of control over company data, and sensitive data exposure through cyberattacks.
Unchecked AI hallucinations
Agents who haven’t been trained to fact-check AI outputs–and are using models that haven’t been grounded in your company’s knowledge–are at risk of acting on or sharing AI hallucinations.
Hallucinations occur because models are trained to produce outputs based on probability: they predict the next word in a sequence based on statistical patterns. Because their primary goal is to generate a coherent response, they may prioritize sounding confident and helpful over being factually accurate–especially when they lack the specific context of your business.
This can be especially dangerous when a support agent turns to a public AI model to generate a response to a customer. If the agent uses an AI-generated response without fact-checking it, they may give the customer inaccurate information. This can cause customers to make misinformed decisions that range from annoying (e.g., accidentally damaging a physical product based on incorrect troubleshooting guidance) to harmful (e.g., making financial plans based on a hallucination). It can also lead to customer dissatisfaction, churn, and potential legal liability for your business.
Compliance breaches
Shadow AI can be a compliance nightmare, especially if you work in a regulated industry or region with data sovereignty laws.
The major public LLMs run on public clouds with global data centers. Support agents using the free tier of a public LLM don’t get to choose which data centers their prompts are processed in, which means they’re potentially routing your company’s data outside your company’s region. If your company is based in (or serves customers in) a region with strict data privacy laws (like GDPR in Europe), simply sending data to a region without equivalent privacy protections can be enough to violate compliance and trigger hefty fines.
One additional compliance risk that’s worth noting: most free-tier LLMs retain user prompts for model training unless the user explicitly opts out. This means when a support agent pastes a complex workflow or a unique resolution strategy in a prompt, that information becomes fuel to inform future outputs for other LLM users. Your company’s proprietary information could even be surfaced to a competitor.
Cybersecurity attacks
For many security teams, prompt injection is the most immediate risk of shadow AI. Prompt injection occurs when a bad actor hides malicious instructions within a seemingly harmless piece of text. In a shadow AI scenario, a support agent might copy-paste a customer’s email or a technical document into an LLM to summarize it. If that text contains a hidden "injection" attack, the AI could be tricked into ignoring its safety filters, leaking sensitive internal data, or performing unintended actions.
The impact of this type of cyberattack can be far-reaching. Negative consequences may include:
- Exposure of proprietary company information or customer PII
- Credential theft
- Fines or legal liability for the exposure of sensitive customer data
- Reputational damage and customer churn
- Increased regulatory scrutiny
How support leaders can combat shadow AI
The rise of shadow AI in support is sending a clear message: agents want to use AI to work more efficiently, and if your organization doesn’t provide AI tools for them to use, they’ll find their own. The best way to combat this is to work with your IT, security, and compliance teams to vet and approve specific AI tools for your team, then set guardrails to control what those tools can do and how your agents use them.
Identify the AI models that meet compliance and security requirements
Start by connecting with your IT or InfoSec team to determine if other departments are already using approved AI tools that could benefit your support team. If there are specific LLMs that have already been vetted and implemented in other parts of the business, you may be able to streamline the procurement and security review process.
You’ll also need to determine if your organization will allow your support team to use paid versions of public LLMs (such as ChatGPT, Claude, or Gemini) or if you’re only able to use private AI models.
Private AI models may include LLMs developed by your organization (due to the cost to develop these models, this is typically only the case in very large enterprises) or models that can be deployed in a self-hosted environment, such as Meta’s LLaMa, Mistral AI, DeepSeek, and Cohere. With these models, your data never leaves your security perimeter for processing or storage.
When you may need private AI:
- You’re in a regulated industry
- Your team handles sensitive customer data (e.g., financial or healthcare data)
- You’re in a region (or serving customers in a region) with data residency requirements
If your organization requires you to use a private AI model, this will impact your support software selection as well. If you want to use a help desk with AI features, you’ll need to choose a platform that can run in your private environment and lets you use your AI model(s) of choice.
Define acceptable AI usage
Once you’ve identified the AI models and support software your team can use, you’ll need to work with your InfoSec and Compliance teams to define acceptable usage. Questions you’ll need to answer include:
- What tasks or processes should support agents use AI for?
- What information can employees include in their prompts?
- What information should never be included in prompts?
- What is the review process for AI outputs?
- Who is responsible for monitoring AI usage across the team?
When you have documented answers to these questions, run a training session with your agents to walk them through approved AI use cases and expectations. Store the document somewhere that agents can easily access going forward, and remember to update it as security and compliance requirements change.
Index trusted knowledge sources
You’ll need to connect your approved AI tools to approved company knowledge sources; this is what enables an LLM to use business-specific context when generating an output. Without this, you risk leaving your agents with tools that give them generic, unhelpful outputs–or worse, generate confident-sounding hallucinations.
You may choose to index an internal knowledge base, customer-facing help center, or policy and product documents with your AI. Resolved tickets are another valuable–and often overlooked–knowledge source. If you do plan to index your resolved tickets, make sure your AI tools support PII redaction, which strips sensitive information from ticket content before it's passed to the LLM for processing, so that data can't surface in AI-generated outputs.
Beyond PII redaction, it’s worth configuring additional guardrails around your indexed knowledge (such as limiting the knowledge sources used in outputs based on role or team). With the right controls in place, your AI tools can generate more accurate, context-specific suggestions, giving agents less reason to turn to public tools to try to fill the gaps.
Helping agents work more efficiently with approved AI
When you work cross-functionally to vet and implement approved AI tools, define clear usage policies, and ground your AI in trusted organizational knowledge, you give your agents a faster, smarter way to resolve tickets, without turning to LLMs and workflows that your organization can’t easily manage or audit. Get your AI deployment right and you’ll see increased efficiency, lower levels of agent stress, and better support experiences, all while meeting your security and compliance requirements.
You can learn about Deskpro’s approach to secure AI here or book a demo to see our AI features in action.
Date published • May 14, 2026

