For years we have been told that AI will revolutionize cybersecurity. It has already shaken things up by automating repetitive, high-volume tasks such as log analysis and anomaly detection. But many other tasks, such as monitoring and threat hunting, baselining user behavior, and threat response, still need a human in the loop. What we can say with certainty about the present state of AI-backed security is that it has neither freed analysts from the burden of false positives nor detected and stopped many breaches in real time. What it has done is augment SOC engineers’ workflows, not automate them. So we need to ask: what can be done to make AI safer to use?
Black hat, white hat
One thing that sets AI apart is that both sides use it. While we have already talked about AI in security tools, attackers are also using it to create threats such as deepfakes. Adopting the technology has also introduced newer risks like shadow AI. IBM’s Cost of a Data Breach Report 2025 covers 600 organizations that suffered breaches, and one in five of them reported a breach caused by shadow AI. If that is not alarming enough, only 37% have policies to manage AI or detect shadow AI. The report states that organizations with high levels of shadow AI saw breach costs that were, on average, $670,000 higher than those with low levels of shadow AI or none at all.
Constrained technology
We have seen how AI can be both sword and shield, depending on how it is used. But both the sword and the shield carry chips that can cause them to shatter, usually at the worst possible moment. Some of these chips are:
- Poisoned LLMs: Attackers can deliberately “teach” AI models to misclassify malicious activity by feeding in manipulated data.
- Limited context awareness: AI is very efficient at pattern recognition but struggles to understand business context. It can misinterpret events such as legitimate but unscheduled data backups as anomalous, generating false positives (see the sketch after this list).
Shadow IT & SaaS Usage: AI tools may not see unsanctioned SaaS apps or BYOD devices unless they are integrated with strong discovery tools (CASB/SSPM). This invisibility creates blind spots that can become attack gateways, especially if the AI tool wasn’t trained to monitor them.
- GIGO and encryption: An AI model is only as good as the data it is trained on. Poor, incomplete, or biased training data leads to blind spots and skewed detection (garbage in, garbage out). AI also struggles with encrypted data unless it is paired with decryption infrastructure.
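To make the limited-context-awareness point concrete, here is a minimal sketch in Python. Everything in it is illustrative and assumed rather than taken from any real product: the transfer volumes, the change_calendar lookup, and the three-sigma threshold. A detector that only sees byte counts flags the backup as an anomaly; the business context needed to dismiss the alert lives outside the model.

```python
# Hypothetical sketch: a context-blind statistical detector flags a legitimate,
# approved backup as anomalous because it only sees transfer volumes.
import statistics

# Hourly outbound transfer volumes in GB for one server; the last reading is an
# approved but unscheduled full backup.
transfer_log_gb = [2.1, 1.8, 2.4, 2.0, 1.9, 2.3, 2.2, 2.0, 150.0]

# Business context the model never sees: hour index -> approved change record.
change_calendar = {8: "approved ad-hoc backup (change ticket CHG-1234)"}

def is_anomalous(samples, threshold=3.0):
    """Return (flag, z): flag is True if the newest sample sits more than
    `threshold` standard deviations above the historical baseline."""
    baseline, latest = samples[:-1], samples[-1]
    z = (latest - statistics.mean(baseline)) / statistics.stdev(baseline)
    return z > threshold, z

flagged, z = is_anomalous(transfer_log_gb)
hour = len(transfer_log_gb) - 1

if flagged:
    # The pattern-only view stops here and raises an alert: a false positive.
    print(f"ALERT: hour {hour} transfer is {z:.1f} standard deviations above baseline")

    # A human analyst, or a detector fed business context, checks the change
    # calendar and can suppress the alert.
    approved_change = change_calendar.get(hour)
    if approved_change:
        print(f"Context check: {approved_change} -> suppress alert")
```

The point of the sketch is that the fix is not a better statistical threshold but an extra data source (the change calendar), which is exactly the kind of business context most AI detectors are never given.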
This list is not exhaustive, but the points above are enough to underline why AI is not a silver bullet for security. Humans are still needed to make the final decisions.
According to Sofia Ali, Associate Director & Principal Analyst at QKS Group, “AI is helping in cybersecurity, but it’s not ready to work on its own. It can speed up tasks like spotting unusual activity, but it often misses the bigger picture and can even create new risks like shadow AI. The real value of AI in security is when it supports human experts, not replaces them.”
Checklist for “AI-powered security”
Keeping in mind the novel threats associated with security tools that use AI, here is a short checklist to help you choose a security product that fits your needs:
- Training data quality: Has the AI model been trained on a diverse range of data rather than only the vendor’s own datasets? This ensures the software can go to work almost immediately.
- Integration: Does the software connect with your existing tech stack? How seamlessly does it work with your SOAR/SIEM workflows?
- Transparency: Can analysts see, and trust, the reasoning behind the actions the software takes?
Final take
AI-backed security delivers many valuable benefits, but you cannot leave everything to the machines. You need to strike the right balance between machines and humans to elevate your security posture.