AI: Defend with it, defend against it
Overview
While AI is revolutionizing how we defend our organizations, it’s also becoming a target in its own right. As organizations embed AI and machine learning (ML) into their products, back-end systems, and business logic, attackers are adapting to exploit these new targets. They’re scanning for exposed models; looking for applications with large language model (LLM) integrations; and connecting the dots between misconfigurations, vulnerabilities, and sensitive data. AI is transforming security workflows, and securing AI itself has become mission-critical. The next evolution of threats and defense are here, and AI is at the center of both.
AI for security
Attackers automate multiple stages of their attack chains to gain speed during cloud-based operations. The CRYSTALRAY campaign in mid-2024 was a prime example of adversarial speed and precision, using a combination of seven open source tools to automate reconnaissance, lateral movement, and credential discovery. Each tool, when chained together, served a unique function, streamlining each step of the attack life cycle, reducing dwell time, and leaving less time for defenders to react.
Defenders have to fight speed with speed. AI is helping security teams drive their investigations from hours down to minutes and improve mean time to respond by 76%. AI can correlate, recommend, and act faster than human analysts can, so it naturally becomes an extension of the team.
AI is helping security teams drive their investigations from hours down to minutes and improve mean time to respond by 76%.
Sysdig Sage adoption by business sector
We’ve seen a 27% increase since the beginning of the year in the number of companies choosing to use Sysdig Sage™, the first fully integrated AI cloud security analyst, as a partner and force multiplier for accelerating defense. More organizations – over 50% of our customers – are choosing to use this generative AI security analyst because it expedites and simplifies complex and time-consuming security efforts such as investigations and the vulnerability management process.
It comes as no surprise to see the software and business services industries as quick adopters of Sysdig Sage because innovation typically moves fastest within these sectors, and their services support all other industries. However, many AI adoption surveys yield different results, suggesting that top AI adopters may be in the financial, automotive, manufacturing, or healthcare sectors, with little consistency. A key reason for these discrepancies is that many software companies and business service providers serve a diverse market and may be categorized under different industries. For example, a financial software-as-a-service (SaaS) provider may fall under the financial industry. As a result, reports may vary from one another.
AI for security recommendations
1.
Enrich and correlate data
Use AI to enrich data with relevant context; correlate related events; identify semantic insights; and recommend actions faster than a human analyst can, because speed matters.
2.
Summarize and connect related intelligence
Use AI to summarize alerts; identify similar alerts or prior incidents; and highlight related intelligence, common vulnerabilities and exposures (CVEs), or techniques.
3.
Simplify security processes
Use AI not just to automate and expedite tasks, but to decode cloud complexity by highlighting patterns, explaining behaviors, and providing understandable and actionable insights.
4.
Keep a human in the loop
Treat AI recommendations like tier-one analyst advice and add human confirmation to maximize trust.
Security for AI
AI is transforming how we defend and changing every aspect of business operations. From 2024 to 2025, Sysdig observed nearly 500% growth in the number of AI and ML packages running in workloads. This surge shows that AI is not just powering security; it’s embedded in product features, the customer experience, back-end business operations, and more.
In the last six months, however, the total number of AI and ML packages running in workloads has declined by 25%. This reduction is actually a sign of maturing AI adoption. It is likely because organizations are consolidating into well-maintained libraries and possibly removing shadow AI, or unmanaged AI deployments, therefore reducing the burden on security teams and the business risk footprint. Organizations are being more strategic and selective with their workloads, reducing redundancy and technical debt. Cleaning up shadow AI is a sign of improved visibility, governance, and inventory management.
However, with great power comes great responsibility. Sensitive and proprietary business data is intentionally or unintentionally fed into LLMs through product functionality or misuse, making it imperative to keep these workloads secure.
Moreover, in 2024, the Cloud Native Computing Foundation (CNCF) reported that 54% of AI and ML workloads were being built on Kubernetes, the most well-known and widely used open source container orchestration platform. AI tools are being built with open innovation, which is both evidence of the power of open source and a reason to take your organization’s security into your own hands.
Open source software (OSS) and tools sometimes present their own unique, yet well-defined and easily remediated security challenges, such as:
- A lack of maintenance and regular support.
- An overreliance on tens or hundreds of other libraries, increasing supply-chain risk.
- Public availability of source code, which means that attackers can study and identify vulnerabilities before maintainers patch them.
- Insecure default configurations, which include no authentication, open APIs, and no network restrictions.
reduction in the total number of AI and ML packages running in workloads in the last six months
of AI and ML workloads are built on Kubernetes
of organizations are developing their own AI applications
In June of this year, the Sysdig Threat Research Team (TRT) detected an attacker targeting and exploiting an exposed open source AI tool. Publicly available tools like Shodan make it easy for attackers to find any door left open. OSS tools are designed to be flexible and easy to use, which is why the admin panels default to no authentication, services listening on all interfaces, APIs left accessible over the internet, and more. When implementing an open source tool, it must be properly configured; otherwise, you risk providing attackers with privileged access and default credentials. Leaving a workload unintentionally exposed to the internet allows attackers to, for example, perform detailed infrastructure reconnaissance, exploit vulnerabilities, run cryptominers, or launch distributed-denial-of-service (DDoS) attacks.
Amid rapid AI evolution, there are still avenues to secure and control this expanding attack surface. Spurred on by the development of AI and ML on Kubernetes and the massive adoption of the technology across organizations, real-time cloud-native best practices are key to mitigating risk.
Sysdig, for example, made risk identification quick and easy to avoid incidents similar to Open WebUI by using an icon to indicate when a workload containing an AI package is exposed. Just four months after rollout, we saw a 38% reduction in public exposure of workloads containing AI packages. Furthermore, by prioritizing AI security, organizations can safeguard both their models and their most sensitive data with simplified security visibility and a stronger posture.

Public exposure of workloads containing AI
April - December 2024
January - June 2025
However, exposure is a calculated risk companies take in order to share an application with customers or end users. The API must be accessible, which is an attacker attractant; therefore, all other security safeguards like authentication, authorization, runtime visibility, and governance are critical.
In fact, the public exposure we reported at the beginning of the year doubled over the last six months from 13% to 27%, likely due to an increasing number of organizations sending their AI applications to production environments, in other words, going to market. Gartner reported in early June 2025 that 38% of organizations are developing their own AI applications. [1] This is a true mark of progress, signaling the acceleration from experimentation to deployment in the last six months. A high bar has already been set for AI security, and this shift raises the stakes. As AI applications mature, so must their security.
[1] Gartner, Opening Keynote: Harness the Hype: Turning Disruption into Cybersecurity Opportunity, Leigh McMullen and Katell Thielemann, 9 June 2025.
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
Security for AI recommendations
1.
Control external API access
Expose as few AI endpoints as possible to minimize the attack surface, secure exposed endpoints by enforcing authentication, and apply rate limits.
2.
Configure admin interfaces
Enable multifactor authentication and limit exposure to trusted networks and authenticated services only.
3.
Enforce least privilege
Limit the number of root users and privileged service accounts that can manage the AI workloads.
4.
Communicate usage policies
Establish a companywide AI policy based on organizational data classification, use rights, and permitted tools and services, and educate employees to reduce the risk of shadow AI and unauthorized tool use.
5.
Implement guardrails
Deploy contextually aware middleware or otherwise analyze prompt input and model output to filter and flag sensitive data and potential threats.
6.