The word “SaaSpocalypse” entered the mainstream after the introduction of Anthropic’s Claude Cowork AI tool and the subsequent bloodbath in the share markets. The reason is fairly obvious: you no longer need to log into anything. Instead of opening a CRM, marketing platform, or support tool and performing tasks inside it, you get a conversational interface and simply “talk” to the tool to get the job done. The SaaS product becomes a background data source. This shift reduces the vendor’s control over the user experience and weakens brand loyalty, because the assistant, not the application, becomes the daily touchpoint. It also means that as long as users keep getting the desired result, they will care less about which specific SaaS product sits underneath, which fundamentally alters how value, loyalty, and pricing are determined in the software market. Hence the panic.
However, to paraphrase Mark Twain, reports of the demise of SaaS and traditional software development may be exaggerated. Vendors like Salesforce have already embraced AI as a critical part of their offerings, and we have to accept that vibe coding, along with its related issues, is here to stay. Taken together, this means more security challenges, which will only become more novel and harder to deal with as AI itself is still a developing technology.
Threat landscape
The current threat landscape features the typical risks associated with AI, such as prompt injection, shadow AI, data leakage, account sprawl, and overprivileging. The real threats, however, will emerge later, as these tools mature and become integrated into organizational workflows.
The security risks created by AI-driven SaaS interactions are likely to emerge in phases rather than all at once, and the pace will depend more on how autonomous AI agents become inside enterprise systems than on the quality of AI-generated code. For the next 12 months, most organizations will remain in an early adoption phase. This phase is typically characterized by controlled experiments, pilot deployments, and a significant amount of shadow AI usage. Employees are already connecting AI assistants to SaaS tools for tasks such as summarizing data, drafting emails, or generating reports, but most actions still require human approval. In this period, the primary security issues will be shadow AI integrations, sensitive data leakage through prompts, over-privileged API tokens, and the growing number of unmanaged service or machine identities. These risks are already showing up in security reviews and identity governance assessments. But the overall impact remains moderate because AI is still mostly used in a read-only or advisory capacity, and senior engineers or architects often act as gatekeepers.
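The over-privileged-token problem described above is straightforward to check for. Below is a minimal sketch of the kind of audit identity-governance teams are already running: flag OAuth grants held by AI integrations whose scopes go beyond a read-only baseline. The scope names, grant records, and the read-only baseline itself are illustrative assumptions, not any vendor’s actual API.

```python
# Hypothetical OAuth-grant audit: flag AI apps whose granted scopes
# exceed a read-only baseline. All scope names are illustrative.

READ_ONLY_SCOPES = {"files.read", "crm.read", "mail.read"}

def risky_grants(grants):
    """Return (app, excess_scopes) pairs for AI apps with write/admin scopes."""
    flagged = []
    for grant in grants:
        excess = set(grant["scopes"]) - READ_ONLY_SCOPES
        if grant["is_ai_app"] and excess:
            flagged.append((grant["app"], sorted(excess)))
    return flagged

grants = [
    {"app": "ai-summarizer", "is_ai_app": True,  "scopes": ["files.read"]},
    {"app": "ai-assistant",  "is_ai_app": True,
     "scopes": ["crm.read", "crm.write", "admin.users"]},
    {"app": "backup-tool",   "is_ai_app": False,
     "scopes": ["files.read", "files.write"]},
]

print(risky_grants(grants))
# [('ai-assistant', ['admin.users', 'crm.write'])]
```

The read-only summarizer passes, and the non-AI backup tool is out of scope for this audit; only the assistant holding write and admin scopes is flagged.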
Between roughly one and three years from now, many organizations are expected to enter a more operational phase. This phase will see AI agents performing write actions across SaaS environments. Instead of only generating insights, AI systems will start updating CRM records, creating tickets, triggering workflows, and coordinating tasks across multiple applications. At this stage, the structural security problems will become more visible. Enterprises will face agent identity sprawl, with thousands of automation tokens and AI-driven service accounts. Attribution will become weaker, making it harder to determine whether an AI or a human initiated the action. Prompt injection attacks will move from theoretical concerns to real incidents, and cross-SaaS data exfiltration will become more common as AI agents aggregate and move information between systems. This is the phase where boards, regulators, and auditors will start paying closer attention to AI-related incidents.
Around the three-to-five-year mark, the deeper structural risks are likely to appear as AI becomes the primary interface to SaaS applications. Users will interact with systems through conversational agents, and many workflows will be executed autonomously without direct human involvement. In this environment, human accountability becomes less clear, and traditional identity and access management models begin to break down. Attackers may target the API layer that underpins AI interactions, and complex supply-chain attacks could emerge through compromised AI plugins, connectors, or agents. Compliance frameworks built around human decision-making will struggle to adapt to autonomous actions, forcing organizations to rethink governance and audit models.
Beyond five years, a longer-term scenario could emerge in which many business processes are largely autonomous, and AI agents manage transactions, negotiations, and operational decisions with minimal human intervention. In such an environment, new categories of risk may appear, including autonomous fraud, AI-to-AI attack chains across vendors, and economic manipulation of automated decision systems. Security will shift from protecting applications from users to governing large ecosystems of machine actors, and regulators may introduce entirely new frameworks focused on machine accountability.
QKS Group Principal Analyst Sujit Dubal advises, “Agentic SaaS is fundamentally changing the enterprise risk model. Organizations are no longer securing only applications and data. They are now governing autonomous decision-making entities embedded within business workflows. This shift demands a new control plane that connects SaaS posture, data readiness, identity governance, and AI oversight into a single operating framework. Vendors that continue to treat AI risk as an extension of legacy SaaS security will fall behind. The market is clearly moving toward platforms that can establish accountability, govern non-human identities, and secure the data foundations that power enterprise AI.”
However, there are several real-world constraints that moderate the timeline for these risks. Human oversight still plays a major role, as senior engineers and architects validate AI outputs before they reach production. Enterprise adoption cycles are slow, and large organizations take years to integrate new automation models. Compliance and regulatory requirements, especially in highly regulated industries, also slow the deployment of fully autonomous agents. As a result, the most severe security challenges are unlikely to become widespread immediately. The most common issues over the next year will involve shadow AI, data leakage, and over-privileged tokens, while more serious agent-driven incidents and identity sprawl are likely to become prominent within two to three years. The deeper architectural and governance risks tied to fully autonomous workflows are more likely to emerge in the three-to-five-year horizon. The central point is that the security curve steepens not because AI writes better code, but because it gains the ability to act independently inside enterprise SaaS environments.
Market Landscape
The market landscape is evolving in four phases:
Phase 1: The Immediate Focus (0-12 Months)
Currently, the market is heavily oriented toward SaaS Security Posture Management (SSPM) and mitigating Shadow IT. Organizations are prioritizing visibility into unmanaged agents and monitoring OAuth tokens to prevent data leakage. Key players like Obsidian Security, Valence, and Reco AI are positioned here to provide the discovery tools necessary to identify “shadow” AI integrations before they become entrenched.
Phase 2: Addressing Identity Sprawl (1-3 Years)
As AI agents move from “read-only” to performing “write” actions, the focus will shift to Machine Identity and Non-Human IAM (NHIAM). The challenge will be managing the sheer volume of automation tokens and service accounts. Vendors such as CyberArk, BeyondTrust, and Lacework are expected to lead this space by offering Just-In-Time (JIT) privileges and behavioral baselines to manage the lifecycle of agent tokens.
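Just-In-Time privileges address sprawl by replacing standing credentials with short-lived, narrowly scoped ones. Below is a minimal sketch of that pattern, assuming a single scope per grant and a five-minute TTL; the identifiers and TTL are illustrative, not any vendor’s implementation.

```python
# Minimal JIT machine-credential sketch: each agent token is scoped to
# one task and expires in minutes, so standing privileges never
# accumulate. Agent names, scopes, and TTL are illustrative.
import secrets
import time

TOKEN_TTL_SECONDS = 300  # five-minute lifetime

def issue_jit_token(agent_id: str, scope: str) -> dict:
    return {
        "token": secrets.token_urlsafe(16),  # opaque bearer credential
        "agent": agent_id,
        "scope": scope,                      # exactly one scope per grant
        "expires_at": time.time() + TOKEN_TTL_SECONDS,
    }

def is_valid(tok: dict, required_scope: str) -> bool:
    """Accept only an unexpired token whose scope matches the request."""
    return tok["scope"] == required_scope and time.time() < tok["expires_at"]

tok = issue_jit_token("crm-sync-agent", "crm.write")
print(is_valid(tok, "crm.write"))    # True while the token is fresh
print(is_valid(tok, "admin.users"))  # False: scope mismatch
```

Because every token dies on its own, revocation becomes the exception rather than the routine, and an inventory of live tokens stays roughly proportional to the work in flight instead of growing without bound.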
Phase 3: Securing Autonomous Workflows (3-5 Years)
In the mid-term, as conversational agents become the primary interface for SaaS, the security frontier moves to the API layer. Market leaders in API Security and Policy Engines, such as Salt Security, Traceable AI, and Styra (OPA), will be critical for providing runtime protection. Their role will be to enforce dynamic policies on agent-to-SaaS calls as workflows become increasingly autonomous.
Phase 4: Long-Term Ecosystem Governance (5+ Years)
The market will likely see a convergence of security functions within AI Agent Platforms. As business processes become largely autonomous, large-scale providers like Cloudflare (SASE), Microsoft (Defender), and Darktrace will likely offer integrated governance. These platforms will focus on anomaly detection across complex “AI-to-AI” attack chains and governing vast ecosystems of machine actors.
Vendor Landscape
| Phase/Risk | Category | Key Vendors | Why They Fit |
| --- | --- | --- | --- |
| 0-12 Months (Shadow AI, Data Leakage, Tokens) | SaaS Security Posture Management (SSPM) / Shadow IT | Obsidian Security, Valence, Reco AI | Visibility into unmanaged agents, OAuth monitoring, token discovery. |
| 1-3 Years (Identity Sprawl, Prompt Injection, Writes) | Machine Identity / Non-Human IAM (NHIAM) | CyberArk, BeyondTrust, Lacework | Agent token lifecycle, JIT privileges, behavioral baselines for sprawl. |
| 3-5 Years (Autonomous API Attacks) | API Security + Policy Engines | Salt Security, Traceable AI, Styra (OPA) | Runtime protection for agent-to-SaaS calls, dynamic policy for autonomy. |
| Long-Term (AI Ecosystems) | AI Agent Platforms (Integrated) | Cloudflare SASE, Microsoft (Defender), Darktrace | Converged governance for machine actors, anomaly detection across chains. |
