Smarter support: How regulated industries can adopt AI safely
Support teams in regulated industries are frequently blocked from using AI due to data security and compliance concerns. But they can get the CX benefits of AI by taking a thoughtful approach to infrastructure and implementation.
Support leaders know that AI has the potential to intelligently automate workflows, improve their first response and resolution times, and uplevel their customer experience (CX). But unfortunately, many of these leaders in regulated industries have been blocked from rolling out AI due to concerns over security, compliance, and customer trust.
While 92% of tech companies are using some level of AI for customer support, only 58% of organizations in regulated industries can say the same. And those that are using it haven’t necessarily gone live: more than half (52%) of support teams say they’re still in the planning or pilot stages with AI.
The primary issue is that SaaS support platforms and the large language models (LLMs) that power their AI capabilities run in the public cloud. While public cloud providers invest heavily in cybersecurity, those security measures aren’t always enough for organizations in industries like financial services, government, aerospace, healthcare, and utilities. These organizations must comply with strict data security regulations, and security breaches can lead to fines, legal actions, and reputational damage.
At the same time, CX is as much of a competitive advantage in certain regulated industries as it is for tech or retail companies. For example, a 10-year McKinsey analysis found that CX leaders in banking delivered 55% higher total shareholder returns compared to banks with low CX performance. In healthcare, satisfied patients are 28% less likely to switch providers.
Regulated industries shouldn’t have to choose between security and the CX benefits of AI. The key is combining secure infrastructure with AI use cases that always keep humans in the loop.
The AI challenges in regulated industries: Security and customer trust
While it’s safe to say data security breaches aren’t a good thing in any industry, they can be especially devastating for organizations that handle sensitive, proprietary data or personally identifiable information (PII), such as customers’ bank account details or patients’ healthcare histories. While these organizations may face fines for failing to comply with data security regulations, that’s arguably not the worst part of a data breach.
Leaking sensitive data puts customers at risk for a wide range of negative outcomes, including having their bank accounts compromised, becoming victims of spear phishing scams, or even experiencing medical identity theft. Even without experiencing these specific negative outcomes, breaches often cause customers to lose trust in the organizations that failed to protect their data. Once lost, that trust is hard to get back.
AI options for teams that can’t deploy in the public cloud
Many organizations in regulated industries can’t afford the risk of using customer support software and AI tools that are deployed in the public cloud, even with rigorous cybersecurity guardrails in place.
To understand why regulated industries can’t take this risk, it’s important to have a basic grasp of how data is transferred between and processed by customer support platforms and AI models. Most AI-powered SaaS platforms send data to LLMs hosted in the public cloud; these models process natural language prompts and relevant indexed content to generate an output, such as a recommended response to a customer question or a summary of a long ticket conversation, within the support platform. While that data is typically encrypted in transit, it must be decrypted within the cloud provider’s infrastructure for AI processing.
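To make the encrypted-in-transit distinction concrete, here is a minimal sketch of what a support platform might send to a cloud-hosted LLM service. The endpoint URL and payload shape are hypothetical, not any vendor’s real API:

```python
import json

# Hypothetical LLM endpoint; stands in for whatever cloud AI service
# the SaaS support platform actually calls.
LLM_ENDPOINT = "https://llm.example-cloud.com/v1/generate"

def build_llm_request(ticket_text: str) -> bytes:
    """Serialize the prompt that would be POSTed to the LLM service."""
    return json.dumps({
        "prompt": "Summarize this support ticket:\n" + ticket_text,
    }).encode("utf-8")

# TLS encrypts this payload on the wire, but the provider must decrypt
# it server-side to run the model: the raw ticket text exists in
# plaintext inside the provider's infrastructure during processing.
request_body = build_llm_request("Customer cannot access their account.")
```

The key point is in the last comment: no matter how strong the transport encryption, the provider necessarily sees the decrypted ticket content while the model runs.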
Cloud providers put strict security measures in place to prevent one customer’s data from being exposed to another, and to keep bad actors from accessing and exfiltrating data, but no security measure is completely infallible. And with the shared infrastructure of the public cloud, a single vulnerability in a shared service or platform can expose all the organizations that rely on it.
The issue for regulated industries ultimately comes down to control. With public cloud infrastructure, businesses lack control of their data once it leaves their own environment, have limited visibility into data access controls, and must rely on their SaaS vendor and cloud provider to manage security and patch potential vulnerabilities. Additionally, LLMs open a new attack vector for cybercriminals: attacks such as prompt injection are becoming commonplace, and it’s challenging for CISOs to keep up.
The good news is that support teams in regulated industries that can’t use SaaS platforms and AI tools in the public cloud can turn to alternative deployment options, including:
- Virtual private cloud (VPC)–An isolated private network set up within a public cloud provider’s infrastructure. Communication to AI models on the public cloud provider’s AI services can be supported by strong encryption.
- Sovereign cloud–An isolated network that keeps data within a specific geographic area. These may be country-specific, such as Bleu in France, or region-specific, such as the AWS European Sovereign Cloud.
- On-premises–Customer support platforms and AI models run entirely within an organization’s own environment, whether that’s a private data center, a colocation facility, or a local, private deployment with AWS Outposts, Microsoft Azure Local, or Google Distributed Cloud.
The deployment model organizations choose will depend on their specific security, compliance, and data sovereignty requirements. It’s essential for support teams to involve their security and compliance teams early in the decision-making process to ensure they choose a deployment type that allows them to take advantage of their customer support platform’s AI capabilities.
Human-in-the-loop AI for regulated industries
In addition to choosing a deployment model that meets their organization’s security requirements, support teams in regulated industries need to be thoughtful about how they use AI. While customer-facing chatbots are the most well-publicized example of AI in the support industry, this isn’t where most support teams (especially those in regulated industries) want to start.
Customer-facing AI chatbots present a couple of key challenges. There’s a risk that chatbots will share inaccurate or outdated information with customers, leading them to make misinformed decisions. There’s also a risk that the chatbot could be prompted into exposing PII or proprietary information. While there are things companies can do to mitigate these risks, such as using a PII redaction tool and configuring strict guidelines around what a chatbot can and can’t answer, there’s no way to eliminate these risks completely: they’re inherent in AI outputs that aren’t first vetted by the support team.
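To illustrate the PII redaction mitigation, here is a minimal, regex-based sketch. Real deployments typically use a dedicated redaction service or a trained entity-recognition model rather than hand-written patterns, but the principle is the same: scrub identifiers before any text reaches an AI model.

```python
import re

# Illustrative patterns only; production redaction covers far more
# identifier types (names, addresses, account numbers, and so on).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace recognized PII with labeled placeholders before prompting."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text
```

Even with redaction in place, anything the patterns miss still reaches the model, which is why redaction reduces but never eliminates the exposure risk.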
The better approach for support teams in regulated industries is to start with human-in-the-loop AI use cases, such as:
- Intelligent ticket triage. AI can analyze ticket content for customer sentiment, intent, and conversation topics, then route each ticket to the most appropriate support agent or team. This can be combined with rule-based automation to help manage agent workloads and ensure customers have their issues addressed promptly and accurately.
- Ticket summarization. AI can summarize long ticket conversations to give support agents the most important information upfront. This can be especially useful for tickets that are handed off from one agent to another.
- Suggested responses for agents. AI can analyze ticket content and suggest knowledge base articles or responses for support agents to use. If an organization is using resolved tickets as an AI knowledge source, the AI can also draw upon previous successful responses when generating its recommendation. Support agents can then review and edit suggested responses for accuracy before sending them to the customer.
- Knowledge base article generation. AI can help fill the gaps in an existing help center or knowledge base by using content in resolved tickets to generate new articles. As with suggested responses for agents, a human should always review articles before publishing. This approach gives support teams an efficient way to grow their customer- or public-facing knowledge base without compromising accuracy.
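All four use cases above share one structural gate: an AI draft (a suggested reply or a generated article) is held in a pending state and can only reach the customer after an agent explicitly reviews and approves it. A minimal sketch of that gate, with an illustrative `Draft` workflow that doesn’t reflect any specific product’s API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    ticket_id: str
    ai_text: str                      # what the model proposed
    status: str = "pending_review"    # never customer-visible in this state
    final_text: Optional[str] = None  # what the agent actually approved

def approve(draft: Draft, agent_edit: Optional[str] = None) -> Draft:
    """Agent reviews the AI draft, optionally edits it, then approves it."""
    draft.final_text = agent_edit if agent_edit is not None else draft.ai_text
    draft.status = "approved"
    return draft

def send_to_customer(draft: Draft) -> str:
    """Only approved drafts can be sent; raw AI output is blocked."""
    if draft.status != "approved":
        raise PermissionError("AI output must be agent-approved before sending")
    return draft.final_text
```

The design choice worth noting is that `send_to_customer` enforces the human review step structurally, rather than relying on agents to remember it.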
These use cases can improve agent productivity, self-service, and the customer experience while ensuring no AI outputs get in front of a customer until they’ve been reviewed, edited, and approved by the support team. Support teams in regulated industries shouldn’t overlook the lower risk and substantial benefits of implementing behind-the-scenes AI.
Combining secure infrastructure with human-in-the-loop AI
AI is reshaping what customers expect from support interactions. Customers aren't just benchmarking their experience with a bank or healthcare provider against other banks or healthcare providers—they're benchmarking it against the best support experience they've had anywhere. As AI raises the bar for CX across every industry, regulated organizations that can't adopt it risk falling behind on customer satisfaction in a way that's increasingly difficult to recover from.
The good news is that support teams in regulated industries don’t have to choose between data security and CX. By choosing customer support platforms and AI deployment models that keep sensitive information in controlled environments, organizations can address the security and compliance concerns that have kept AI on the sidelines. And by using human-in-the-loop AI use cases, they can build and maintain trust with both customers and internal stakeholders.
With the right infrastructure and a thoughtful implementation strategy, AI can deliver meaningful improvements to agent productivity and customer satisfaction without putting data or trust at risk.
Date published • March 17, 2026