The AI Security Dilemma: Navigating the High-Stakes World of Cloud AI

Artificial Intelligence is no longer on the horizon; it's here, and it's being built and deployed in the cloud at a staggering pace. From leveraging managed services like Microsoft Azure Cognitive Services and Amazon SageMaker to building custom models on cloud infrastructure, organizations are racing to unlock the competitive advantages of AI.
But this rush to adoption brings a new, high-stakes set of security challenges. The Tenable Cloud AI Risk Report 2025 reveals that the very platforms enabling this revolution are also introducing complex and often overlooked risks.
Our analysis uncovered a stark reality: AI workloads are significantly more vulnerable than their non-AI counterparts. A staggering 70% of cloud workloads with AI software installed have at least one critical, unpatched vulnerability, compared with 50% of workloads without AI software. In other words, your most innovative projects may also be your most exposed.
Jenga®-style risks in managed AI services
One of the most significant challenges stems from the way managed AI services are built. Cloud providers often layer new AI services on top of existing infrastructure components, a concept we call "Jenga-style" architecture. For example, a managed notebook service might be built on a container service, which in turn runs on a virtual machine.
The problem? Risky defaults and misconfigurations can be inherited from these underlying layers, often without the user's knowledge. This creates a complex and opaque stack of permissions and settings that is incredibly difficult to secure. A default setting that allows root access on an underlying compute instance, for example, could be inherited by the AI service, creating a critical security flaw that isn't visible in the AI service's top-level configuration.
Our research found specific, risky defaults in popular services:
- Amazon SageMaker: Instances were found with root access enabled, giving a potential attacker complete control.
- Amazon Bedrock: Training data buckets were configured without the "block public access" setting enabled, and often had overly permissive access policies.
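
To make these findings concrete, here's a minimal audit sketch, assuming boto3 with read-only SageMaker and S3 credentials; the training-data bucket name is a hypothetical placeholder. It flags notebook instances with root access enabled and buckets missing the "block public access" configuration:

```python
"""Minimal sketch: audit two of the risky defaults described above."""
import boto3
from botocore.exceptions import ClientError

sagemaker = boto3.client("sagemaker")
s3 = boto3.client("s3")

# 1. Flag SageMaker notebook instances that still have root access enabled.
for page in sagemaker.get_paginator("list_notebook_instances").paginate():
    for nb in page["NotebookInstances"]:
        detail = sagemaker.describe_notebook_instance(
            NotebookInstanceName=nb["NotebookInstanceName"]
        )
        if detail.get("RootAccess") == "Enabled":
            print(f"RISK: notebook {nb['NotebookInstanceName']} allows root access")

# 2. Verify a training-data bucket has every "block public access" setting on.
TRAINING_BUCKET = "my-bedrock-training-data"  # hypothetical bucket name
try:
    config = s3.get_public_access_block(Bucket=TRAINING_BUCKET)[
        "PublicAccessBlockConfiguration"
    ]
    if not all(config.values()):
        print(f"RISK: {TRAINING_BUCKET} has block-public-access partially disabled")
except ClientError as err:
    if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
        print(f"RISK: {TRAINING_BUCKET} has no block-public-access configuration")
    else:
        raise
```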
Securing the AI revolution
For security leaders, the goal isn't to stop AI adoption but to enable it securely. This requires a proactive and AI-aware security strategy. Here are four recommendations:
- Extend vulnerability management to AI tools: Your security program must account for the unique software stack of AI development. This includes popular libraries like TensorFlow and PyTorch, as well as the underlying infrastructure. The high rate of critical CVEs in AI workloads shows that basic vulnerability hygiene is more critical than ever (see the first sketch after this list).
- Scrutinize managed service configurations: Do not trust the defaults. When deploying managed AI services like Amazon SageMaker, Google Cloud Vertex AI or Azure Cognitive Services, conduct a thorough review of the underlying permissions and configurations. Understand the "Jenga stack" you're building on and harden every layer, as the audit sketch above illustrates for two such defaults. Ensure that data storage for training models is properly secured and not publicly accessible.
- Implement strong identity and access controls: AI models and, often, the data they are trained on are incredibly sensitive assets. Apply the principle of least privilege rigorously (see the second sketch after this list). Who and what can access your training data? What permissions does the AI model have at runtime? An attacker who compromises a model could potentially poison it or, worse, use its credentials to move laterally across your environment.
- Adopt a unified security platform: The interconnected nature of AI risk, from an underlying vulnerability to an exposed data bucket to an overly permissive role, demands a unified view. A cloud-native application protection platform (CNAPP) that combines data security posture management (DSPM), cloud security posture management (CSPM) and AI security posture management (AISPM) can identify the sensitive data in your cloud environment, correlate these different types of risk, and surface the insight you need to understand your true exposure and prioritize the most critical attack paths.
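
For the first recommendation, here's a minimal sketch of what extending vulnerability hygiene to the AI software stack might look like: it checks the installed versions of a few common AI libraries against the public OSV vulnerability database (osv.dev). The package list is illustrative, and the script assumes the requests library is available:

```python
"""Minimal sketch: check installed AI libraries against the OSV database."""
from importlib import metadata

import requests

AI_PACKAGES = ["tensorflow", "torch", "transformers"]  # extend for your stack

for name in AI_PACKAGES:
    try:
        version = metadata.version(name)
    except metadata.PackageNotFoundError:
        continue  # library not installed on this workload
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"version": version, "package": {"name": name, "ecosystem": "PyPI"}},
        timeout=30,
    )
    resp.raise_for_status()
    vulns = resp.json().get("vulns", [])
    if vulns:
        ids = ", ".join(v["id"] for v in vulns)
        print(f"{name}=={version}: {len(vulns)} known advisories ({ids})")
    else:
        print(f"{name}=={version}: no known advisories")
```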
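And for the least-privilege recommendation, a minimal sketch of a narrowly scoped inline policy for a training job's execution role; the role name, bucket name and prefix are hypothetical placeholders:

```python
"""Minimal sketch: a least-privilege policy for a training execution role."""
import json

import boto3

iam = boto3.client("iam")

# Grant read-only access to one training-data prefix -- nothing else.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::my-training-data/datasets/*",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::my-training-data",
            "Condition": {"StringLike": {"s3:prefix": "datasets/*"}},
        },
    ],
}

iam.put_role_policy(
    RoleName="sagemaker-training-role",  # hypothetical role name
    PolicyName="least-privilege-training-data",
    PolicyDocument=json.dumps(policy_document),
)
```

Scoping the role to a single read-only prefix means a compromised model or training job can read its own training data but cannot list, read or write anything else in the account.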
AI presents an incredible opportunity, but it also expands the attack surface in new and complex ways. By understanding these unique risks and applying foundational cloud security principles, you can ensure your organization's journey into AI is both innovative and secure.
Discover the full scope of AI and cloud risks in our latest reports.
➡️ Download the Tenable Cloud AI Risk Report 2025 to learn more.
➡️ Download the Tenable Cloud Security Risk Report 2025
➡️ View our on-demand research webinar
JENGA® IS A REGISTERED TRADEMARK OWNED BY POKONOBE ASSOCIATES.