
    Vibe Coding: Undertones of Danger 

    By Nikhil | October 9, 2025

    Vibe coding, or AI-assisted coding, is the “in” thing right now. It is clear why. First and foremost, AI models have improved to the point of producing workable code. Consequently, AI allows faster coding, which means a shortened development life cycle and quicker time to market. Unlike the parable, the markets favor the hare, not the tortoise. Lastly, it allows people with less coding experience to generate production-level code.

    Unfortunately, since it relies on LLMs, it is as much of a minefield as any other GenAI output. First and foremost, developers are not writing code from scratch; they depend on a source that is either a black box or built on open-source libraries, so it is anyone’s guess how secure the generated code is. If the training data includes past vulnerabilities or is of low quality, there is a high chance that code produced from such a “learned” source will carry the same problems present in that data. As per a study, 40 to 60 per cent of AI-generated code includes security flaws. In addition, the code may pull in vulnerable or malicious third-party libraries. Like other AI applications, it is also prone to hallucinations, fuelling a phenomenon (un)fondly called “slopsquatting”: the model suggests package names that do not exist, and bad actors can register those names with malicious code. This “ability” poses a serious risk of supply chain attacks, as attackers can exploit such tainted code to achieve their objectives, complicating security governance.
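
    To make the slopsquatting risk concrete, here is a minimal sketch (Python, standard library only) of a pre-install check that flags AI-suggested dependency names that do not resolve on PyPI. The package list is hypothetical, and a name that does exist is of course no guarantee of safety; dedicated dependency and SBOM scanners go much further.

```python
# Minimal sketch: flag dependencies suggested by an AI assistant that do not
# exist on PyPI, a basic guard against "slopsquatting" (hallucinated package
# names that attackers may later register with malicious code).
# Assumes network access to pypi.org; the package list below is hypothetical.
import urllib.error
import urllib.request

SUGGESTED_PACKAGES = ["requests", "flask-jwt-helperz", "numpy"]  # hypothetical AI output

def exists_on_pypi(name: str) -> bool:
    """Return True if the package name resolves on the PyPI JSON API."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 404 and similar: the name is not registered

for pkg in SUGGESTED_PACKAGES:
    if not exists_on_pypi(pkg):
        print(f"WARNING: '{pkg}' is not on PyPI -- possible hallucinated dependency")
```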

    Along with security, the code quality itself can be suspect. What if developers accept code blocks without fully understanding and validating them? Repairing such degraded, fragmentary code is a pain in the you-know-where. No wonder there are now jobs that involve nothing but cleaning up garbage code. Big enterprises may have the resources for that; smaller businesses may not.

    Speaking of black boxes, here is another problem: the AI learns from the data it is fed. What happens when that data is proprietary? Are you comfortable feeding the AI your company’s IP? Engineers at Samsung tried it back in 2023, and the result was that the company banned the tools from its workplaces. If not properly configured, coding tools can send project context or snippets containing secrets and sensitive data to external AI APIs. Another workplace danger of AI, shadow AI, can also crop up with vibe coding. Because it democratizes coding through prompts, what is stopping people from building functioning apps that the security team has no knowledge of or involvement in? With no visibility into these apps and workflows, the attack surface stretches considerably, and the applications become difficult to maintain, test, or scale securely, giving rise to the phenomenon known as technical debt.
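
    As a small illustration of the “secrets leaking to external APIs” point, here is a minimal Python sketch of a client-side redaction pass over the context a coding tool is about to send out. The patterns and the snippet are hypothetical and far from exhaustive; dedicated secret scanners such as gitleaks or truffleHog are the proper tools for the job.

```python
# Minimal sketch: redact obvious secrets from a prompt/context payload before
# it is sent to an external AI API. The regexes below are illustrative only;
# real secret scanners (e.g. gitleaks, truffleHog) cover far more patterns.
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                         # AWS access key id
    re.compile(r"ghp_[A-Za-z0-9]{36}"),                      # GitHub personal access token
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),       # PEM private key header
    re.compile(r"(?i)(api[_-]?key|password)\s*[:=]\s*\S+"),  # generic key/password assignment
]

def redact(context: str) -> str:
    """Replace anything that looks like a secret with a placeholder."""
    for pattern in SECRET_PATTERNS:
        context = pattern.sub("[REDACTED]", context)
    return context

# Hypothetical usage: sanitize the snippet before handing it to the assistant.
snippet = 'db_password = "hunter2"\nAWS_KEY = "AKIAABCDEFGHIJKLMNOP"'
print(redact(snippet))
```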

    QKS Group Security Analyst Lokesh Biswal warns, “While AI-assisted coding accelerates development cycles, organizations must recognize they’re entering a security minefield. With 40-60% of AI-generated code containing security flaws and emerging threats like ‘slopsquatting,’ the speed advantage becomes a liability without proper safeguards. Organizations must implement comprehensive security controls or risk trading development velocity for major security debt.”

    To-Do List to Ensure Safe Vibe Coding  

    Area | Control/Practice | Why?
    Automated Builds | Enforce successful compilation and dependency resolution before integration. | To prevent broken or incomplete AI-generated code from entering the main pipeline.
    Automated Testing | Integrate unit, integration, and regression tests for each AI-generated code block. | To detect functional issues, regressions, and edge cases unique to AI suggestions.
    Static Application Security Testing (SAST) | Run static code analysis on every commit, especially for AI-generated code. | To find weaknesses like logic flaws, insecure patterns, and dependency vulnerabilities early.
    Secrets and Sensitive Data Scanning | Scan code and config for hardcoded secrets/tokens with every commit or PR. | To prevent accidental credential leaks common in fast-moving development.
    Signed Commits and Provenance | Require signed commits and validate artifact integrity before promoting to production. | To ensure only authorized changes and trusted code are shipped.
    Branch Protection Rules | Require all AI-generated code to be merged via pull requests with required code review and approval. | To ensure unreviewed experimental code does not reach production.
    Dependency & SBOM Scanning | Use automated tools (like Dependabot) to scan, update, and alert on vulnerable dependencies. | To mitigate supply chain risk from third-party libraries.
    Runtime Security Controls | Use feature flags or canary/blue-green deployments to control live rollout of AI-generated features. | To limit exposure and impact from issues cropping up in production.
    Least Privilege Access | Apply granular privileges to pipeline agents, bots, and developers within CI/CD. | Minimizes blast radius in case of compromise.
    Logging & Monitoring | Enable auditing for all pipeline actions, deployments, and automated changes. | Provides visibility to quickly detect and investigate suspicious activity.
    Context & Prompt Auditing | Track AI prompts and generated code as artifacts in VCS or a side ledger. | Supports reproducibility and security review of AI interventions.
    Regular Human Security Reviews | Incorporate mandatory peer review for high-impact or sensitive changes generated by AI. | A “trust but verify” approach counteracts emergent, subtle flaws.
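
    For teams that want a starting point, the sketch below (Python, hypothetical) chains a few of the controls above into a single pre-merge gate. It assumes pytest, pip-audit, and gitleaks are installed locally; in a real setup these would run as separate CI pipeline steps with branch protection enforcing them.

```python
# Minimal sketch of a local pre-merge gate covering a few controls from the
# table above: automated tests, dependency scanning, and a secrets scan.
# Assumes pytest, pip-audit, and gitleaks are installed; in practice these
# would be CI pipeline steps rather than one script.
import subprocess
import sys

CHECKS = [
    ("Automated tests", ["pytest", "-q"]),
    ("Dependency scan", ["pip-audit"]),
    ("Secrets scan", ["gitleaks", "detect"]),
]

def main() -> int:
    failed = []
    for name, cmd in CHECKS:
        print(f"== {name}: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            failed.append(name)
    if failed:
        print(f"Gate failed: {', '.join(failed)}")
        return 1
    print("All checks passed; safe to open the pull request.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```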

    Final word: 

    Vibe coding has democratized software development: it allows people with no coding knowledge to build applications just by describing what they want, and the AI writes the code. No wonder it is also being used by CEOs. But like all newer technologies, it has downsides that people may not know about or that are yet to be discovered. So go full speed ahead, with extreme caution.
