What does Generative AI have in common with the Marvel Comics supervillain Thanos? Both are inevitable. Generative AI is increasingly being used everywhere. The problem is that everything is fine while the AI generates content based on employees’ ideas, but what happens when it starts generating content based on employee data, and without management’s knowledge?
Shadow AI’s growing shadow
Welcome to shadow AI. It works on the same principle as shadow IT and is even more dangerous. We already know how AI affects decision-making and cognition. According to IBM, adoption of generative AI applications by enterprise employees grew from 74 percent in 2023 to 96 percent in 2024. Today, over one-third (38%) of employees admit to sharing sensitive work information with AI tools without their employers’ permission.
Increased productivity = increased risk
Apart from ease of use, what other factors are behind the increased use of AI? To start with, these tools can be accessed with almost no friction, and some come bundled into services the company already provides. Corporate culture complicates the situation further, as teams and individuals turn to AI to finish the tasks at hand more quickly, which also boosts productivity. However, one key point is overlooked: these tools use black-box architectures. There is no clarity about what happens to the data put into the software; its inner workings are completely opaque. This makes shadow AI more dangerous than shadow IT: it moves faster, is less visible, and can be embedded into workflows without triggering any alarms.
The inadvertent insider
The scariest part of this whole scenario is that employees using shadow AI are unlikely to think they are doing anything wrong. The AI is helping them become more productive, which is good for the organization. But their methods introduce tripwires that can end up costing the organization dearly. Return to the question posed in the opening paragraph: what happens when the AI starts generating content based on employee data? GenAI is being used for a wide variety of tasks, from marketing to coding. What if confidential data is entered into the AI, such as client details or code protected under IP law? Similarly, unvetted output from the AI can be used to support decision-making. The primary threat is the AI’s black-box architecture: with no insight into the tool’s inner workings, organizations risk losing control of their IP and running afoul of data compliance norms. Employees become the unsuspecting risky insiders in these cases. Another scary part? These interactions may escape auditing altogether. Enterprise plans with audit capabilities exist, but they are very costly, and even if such plans are made available by the organization, can we be sure employees stick to them? Governance also comes into the picture: has the organization laid out clear policies on AI use, data sharing, and vetting tools?
Fighting fire before it erupts
The danger is real, so what immediate steps can be taken?
Enable responsible AI usage: It is almost impossible to block AI tools outright. Work with IT, legal, and procurement teams to vet such platforms. Set clear guidelines on which tools may be used and what data is strictly off-limits, and establish a clear process for granting employees access to the tools they request. (A minimal sketch of such a policy check appears after these steps.)
Discover and map the AI surface area: Establish clear visibility into organizational AI use. Know which tools employees are using, with the help of technologies such as CASBs and endpoint monitoring. Check whether sensitive data, such as internal emails, contracts, or sales and customer records, is being fed in, and check where that data is going; black-box architectures mean it may be used to train the underlying language models. Also check whether AI use is affecting your decision-making processes or documentation. (An illustrative discovery sketch also follows below.)
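
To make the first step concrete, here is a minimal sketch in Python of the kind of policy check described above. The approved tool names, the off-limits data patterns, and the check_prompt helper are all hypothetical illustrations, not a prescribed implementation; in practice this would usually be enforced through DLP or CASB policies rather than a script.

```python
import re

# Hypothetical allow-list of vetted AI platforms (illustrative names only)
APPROVED_TOOLS = {"vetted-assistant", "internal-copilot"}

# Hypothetical patterns for data that is strictly off-limits in prompts
OFF_LIMITS_PATTERNS = {
    "client_email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def check_prompt(tool: str, prompt: str) -> list[str]:
    """Return a list of policy violations for a prompt sent to an AI tool."""
    violations = []
    if tool not in APPROVED_TOOLS:
        violations.append(f"tool '{tool}' is not on the approved list")
    for label, pattern in OFF_LIMITS_PATTERNS.items():
        if pattern.search(prompt):
            violations.append(f"prompt appears to contain off-limits data: {label}")
    return violations


if __name__ == "__main__":
    issues = check_prompt("random-chatbot", "Summarize the contract for jane@client.com")
    for issue in issues:
        print("BLOCK:", issue)
```

The point is simply that the approved-tool list and the off-limits data categories are written down somewhere machine-checkable, so new tool requests and outgoing prompts can be evaluated against the same policy.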
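For the discovery step, the sketch below assumes a hypothetical proxy or egress log in CSV form with "user" and "domain" columns, and an illustrative (not exhaustive) list of GenAI domains. In real deployments this visibility typically comes from a CASB or endpoint-monitoring platform, as noted above; the script only shows the shape of the mapping.

```python
import csv
from collections import defaultdict

# Illustrative, not exhaustive: domains associated with generative AI tools
GENAI_DOMAINS = {
    "chat.openai.com",
    "gemini.google.com",
    "claude.ai",
    "copilot.microsoft.com",
}


def map_ai_usage(log_path: str) -> dict[str, set[str]]:
    """Return {user: {AI domains contacted}} from a proxy log CSV."""
    usage: dict[str, set[str]] = defaultdict(set)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].strip().lower()
            if domain in GENAI_DOMAINS:
                usage[row["user"]].add(domain)
    return usage


if __name__ == "__main__":
    for user, domains in map_ai_usage("proxy_log.csv").items():
        print(f"{user}: {', '.join(sorted(domains))}")
```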
Final word
While AI in the workplace has improved productivity and is empowering innovation, shadow AI has introduced new security risks. QKS Group security analyst Venkatesh Kopparthi elaborates, “Shadow AI blurs the line between productivity and policy. It introduces a new class of insider risk, which is unintentional, invisible, and algorithmically amplified.” The solution is not blocking but, in Venkatesh’s own words, switching “from user-centric monitoring to AI-aware governance that focuses on intent, context, and data flow visibility.”
