
From AI Experiments to Real Results in Customer Support

Organizations are investing in AI solutions to improve their support operations and customer experience, but only 25% of AI projects have delivered their promised ROI, according to an IBM survey of more than 2,000 CEOs.

Author: Madeline Jacobson, Content Marketing Manager

While there are many reasons AI projects fall short, data privacy and security concerns are often a major culprit. In many cases, a support team evaluates and pilots an AI solution based on features, but their security, compliance, and legal teams shut the project down due to concerns about exposing private data to a public AI model. Gartner predicts that by the end of 2026, businesses will abandon 60% of AI projects unsupported by AI-ready data.

Data security must be front and center to protect AI investments. Support leaders who work with their security and compliance teams from pilot to production will maximize the value of AI, improving internal operations and external customer satisfaction.

Putting security first to protect AI investments

Support teams are realizing they won’t see the full value of their AI investments unless they factor in data security from the beginning. In a recent Deskpro survey of over 220 support and IT leaders, 81% of respondents said security is “very important” or “critical” when selecting support technology, and 78% involve their IT or security teams in the final decision-making process.

Security, compliance, and legal teams help support teams understand all applicable data protection requirements, including how data must be stored, processed, and reused. These requirements may limit or prevent the support team’s use of software that relies on AI services hosted by the public cloud hyperscalers (AWS, Microsoft Azure, and Google Cloud). Data sent from a CX or help desk platform to a large language model (LLM) service is ultimately processed by the model in plaintext–raising data security concerns, especially for organizations in regulated industries and those with strict security and data privacy requirements.

Support teams in those organizations should choose an AI solution they can configure in a controlled environment, such as a private cloud, sovereign cloud, or their own on-premises enterprise data center.

Controlled environments enable organizations to meet the security and compliance requirements of their business, industry, and region. For example, sovereign clouds store, process, and back up data at data centers within specified geographic boundaries, with technical controls in place to prevent the data from being accessed or processed outside the region. Most sovereign cloud providers are also investing in building out AI capabilities within their clouds. This helps organizations align with local data protection and sovereignty requirements, such as NESA and ISR in the United Arab Emirates, while still benefiting from AI innovation.

Controlled environments also eliminate the risk of private data being exposed to a public AI model. They open up more opportunities for support teams to launch and scale AI solutions and use cases, including behind-the-scenes agent assistance and customer self-service.

Managing cost in a controlled environment

When choosing a controlled environment for their AI solution, support teams also need to factor in cost. While some teams worry that running an AI model in a private environment will increase infrastructure and operational costs, there are ways to keep the cost down while still keeping data secure.

  • Virtual private clouds (VPCs) let you take advantage of a secure environment without having to build and run your own data centers. You get network isolation and security controls while keeping costs usage-based and avoiding the operational burden of on-premises infrastructure.
  • As with VPCs, sovereign clouds don’t require you to maintain your own data center. Sovereign cloud vendors invest in their infrastructure and share the cost across their customer base. While typically more expensive than standard VPCs, sovereign clouds can be more cost-effective than building and operating your own on-premises infrastructure.
  • If your company is already investing in custom LLMs for enterprise-wide use cases, you can use those models to drive your own AI features in your chosen CX or help desk solution (as long as you choose a vendor that can configure the solution in your private environment).
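As a rough illustration of that last point, assuming the company’s custom model is exposed through an OpenAI-compatible API inside a private network (the endpoint URL and model name below are placeholders, not real services), pointing a help desk integration at the private model can be as simple as building requests against an internal URL, so ticket data never leaves the controlled environment:

```python
import json
import os
from urllib.request import Request

# Hypothetical internal endpoint; in a real deployment this would be the
# OpenAI-compatible URL of the self-hosted model inside your VPC or data center.
PRIVATE_LLM_URL = os.environ.get(
    "PRIVATE_LLM_URL", "https://llm.internal.example.com/v1/chat/completions"
)

def build_chat_request(messages, model="company-llama-3"):
    """Build an HTTP request for a privately hosted, OpenAI-compatible LLM.

    Because the endpoint resolves only inside the organization's own network,
    the ticket text in `messages` is never sent to a public AI service.
    """
    payload = json.dumps({"model": model, "messages": messages}).encode()
    return Request(
        PRIVATE_LLM_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Illustrative ticket summary request (ticket number is made up).
req = build_chat_request(
    [{"role": "user", "content": "Summarize the customer's issue in ticket ABC"}]
)
print(req.full_url)
```

The only vendor-specific part is the base URL, which is why choosing a help desk vendor that lets you configure the model endpoint matters.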

In regulated environments, the goal is predictable, defensible costs that align with security and compliance requirements.

Overcoming the stalled pilot problem

Choosing a secure environment sets a strong foundation for any AI initiative–but support teams often still need to complete a pilot program to prove their chosen AI solution is safe, scalable, and valuable. This is where 95% of generative AI projects stall out and fail to deliver, according to a recent MIT study. To avoid getting stuck in the pilot phase, support leaders should:

  • Document data readiness. Work with security and IT teams to document data sources that will be connected to the AI solution, as well as where that data will be stored, who has access to it, and what guardrails have been put in place to protect it.
  • Define the objectives and success criteria. Identify one to three specific and measurable objectives, such as reducing average time to resolution or SLA attainment by a certain percentage. Define what thresholds need to be reached to exit the pilot and scale the AI solution.
  • Document roles and responsibilities. Outline who is responsible for what, including who manages the pilot program, monitors the program for compliance, evaluates data quality, and reviews escalated incidents.
  • Start small. Roll out the AI solution to one team or test it out with one limited application. Starting small can help teams minimize risk and show value quickly.
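To make the “objectives and success criteria” step concrete, here is a minimal sketch of an exit-criteria check. The two objectives and thresholds (a 20% cut in average time to resolution and 95% SLA attainment) are illustrative examples, not recommendations:

```python
def pilot_exit_ready(baseline_ttr_hours, pilot_ttr_hours,
                     sla_attainment, ttr_cut=0.20, sla_target=0.95):
    """Return True when both illustrative pilot objectives are met.

    ttr_cut: required fractional reduction in average time to resolution (TTR).
    sla_target: minimum fraction of tickets resolved within SLA.
    """
    ttr_improved = pilot_ttr_hours <= baseline_ttr_hours * (1 - ttr_cut)
    sla_met = sla_attainment >= sla_target
    return ttr_improved and sla_met

# Example: TTR dropped from 10h to 7.5h (a 25% cut) and SLA attainment is 96%,
# so both thresholds are cleared and the pilot is ready to scale.
print(pilot_exit_ready(10.0, 7.5, 0.96))  # True
```

Writing the thresholds down as code (or even just as a shared spreadsheet formula) forces the team to agree on exact numbers before the pilot starts, which is what prevents the “did it work?” debate later.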

Improving AI performance without compromising company data

Setting up the proper data infrastructure before going live with AI is a huge advantage: because the organization controls where the data lives, the support team can connect the AI model to relevant internal data without worrying about exposing it to a public cloud.

Some organizations are even going so far as to build their own custom models, typically starting with open-source LLMs such as Meta’s Llama or DeepSeek. Training the AI model on organizational information and internal data lets support teams deliver more personalized service rather than relying on generic responses from a public model–something that’s especially important with AI chatbots. “If you can give an AI chatbot access to the appropriate corporate data, you can have that chatbot deliver very timely and accurate answers,” says Brad Murdoch, CEO of Deskpro. “And if you’re doing that, it frees up human agents to focus on more complex, challenging issues, dramatically increasing productivity.”
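Giving a chatbot “access to the appropriate corporate data” usually means retrieving the most relevant internal content and placing it in the model’s prompt. The sketch below uses naive keyword overlap and two made-up knowledge base articles purely for illustration; production systems typically use embedding search over a vector store, but the shape of the pattern is the same:

```python
# Hypothetical internal knowledge base articles (illustrative content only).
ARTICLES = {
    "vpn-setup": "To reset VPN access, open the self-service portal...",
    "billing": "Invoices are issued on the first business day of each month...",
}

def retrieve(query, articles=ARTICLES):
    """Rank articles by naive keyword overlap with the query (sketch only)."""
    terms = set(query.lower().split())
    scored = [
        (len(terms & set(text.lower().split())), key)
        for key, text in articles.items()
    ]
    return [key for score, key in sorted(scored, reverse=True) if score > 0]

def build_prompt(query):
    """Prepend the best-matching internal article to the user's question,
    so the model answers from corporate data rather than generic knowledge."""
    hits = retrieve(query)
    context = ARTICLES[hits[0]] if hits else ""
    return f"Context: {context}\nQuestion: {query}"

print(retrieve("How do I reset VPN access?"))
```

Because the retrieval step and the model both run inside the controlled environment, the internal articles never need to be shared with a public service.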

With any AI use case, the quality of the data used to fine-tune or train the model will impact the quality of the outputs. The better the outputs, the better the productivity, customer experience, and ROI.

AI success starts with data readiness

From chatbots to agent assist tools to automated workflows, there is a wide range of AI solutions that have the potential to help support teams resolve issues more efficiently and increase customer satisfaction. But support teams will continue to be at risk of having their AI projects shut down–and failing to see a return on investment–if they don’t take data protection seriously from the beginning.

Support leaders must take a purposeful approach to any AI project, mapping out data requirements with their compliance and legal teams, establishing the necessary infrastructure for their AI, testing and learning from a pilot program, scaling the program, and fine-tuning the AI model. This approach reduces risk and increases the potential uses for AI, allowing support teams and their organizations to get the full benefit of the technology.

Date published • January 30, 2026