AI, like any technology, is fundamentally neither good nor bad. As always, it depends on who uses it and for what purposes. One thing is undeniable, however: AI is evolving faster than norms and laws, and lawmakers are continually struggling to keep up. The fact that AI is now used to improve AI itself doesn't help: it triggers an unprecedented chain reaction of technological development.
All of this creates new, highly specific security challenges, the most recent of which is vibe coding. As with any AI innovation cycle, it is crucial to understand the fundamentals and the security implications.
But then, what exactly is vibe coding?
At its core, vibe coding is a new approach to software development, defined by the evolving role of the developer. In the past, a developer had to manually write every line of code, then proceed through the usual cycle of review, testing, debugging, and deployment. With vibe coding, a developer, or even a hobbyist, can skip that first step entirely: the AI writes the code, and the human simply guides, tests, and refines it.
On paper, the benefits are obvious. Developers work more efficiently, coding is opened up to untrained individuals, and experimentation becomes cheaper, leading to new consumer-facing applications that are intuitive and easy to use. Even Google CEO Sundar Pichai has given it a try, saying that "it's a real joy to be a developer," after hinting that he had enjoyed building a web app this way.
As with any AI innovation, and given the growing accessibility of AI tools, these practices are gaining visibility across the industry. Just a few weeks ago, vibe coding company Lovable was in discussions for a valuation of $1.5 billion. What is clear is that you cannot stop the tide. The aim is to embrace the shift, build the necessary guardrails, and properly manage the associated risks.
But what are these risks?
Vibe coding represents a true innovation, but it can also fuel cyber threats. The asymmetry is stark: to withstand current threats, organizations need secure, compliant, and maintainable code, whereas malicious code does not need to be high quality or durable to have an impact.
In today’s AI-driven threat landscape, bad actors can even use voice commands to generate malicious code and target vulnerabilities. Pushing the analysis further, AI agents will add another dangerous dimension. Although generative AI can provide coding capabilities within vibe coding, it still needs to be deployed and executed in isolation. That will no longer be the case when an AI agent takes on that responsibility.
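The isolation point above can be made concrete. The sketch below illustrates, under broad assumptions, one way to run AI-generated code in a separate process with a strict timeout rather than in the caller's own process; the function name and parameters are illustrative, and a real sandbox would additionally drop privileges and restrict filesystem and network access.

```python
# Illustrative sketch only: execute untrusted, AI-generated code in a
# separate process with a timeout. Not a complete sandbox (no privilege
# dropping, no filesystem or network restrictions).

import os
import subprocess
import sys
import tempfile


def run_isolated(generated_code: str, timeout_s: float = 2.0) -> str:
    """Write the generated code to a temp file and run it in a child
    Python process, returning whatever it printed to stdout."""
    with tempfile.NamedTemporaryFile(
        "w", suffix=".py", delete=False
    ) as f:
        f.write(generated_code)
        path = f.name
    try:
        result = subprocess.run(
            # -I: isolated mode, ignores environment vars and user site dir
            [sys.executable, "-I", path],
            capture_output=True,
            text=True,
            timeout=timeout_s,  # kill runaway code after timeout_s seconds
        )
        return result.stdout
    finally:
        os.unlink(path)  # clean up the temp file
```

The point is the boundary, not the specifics: once an autonomous agent both writes and executes code, this process boundary is exactly what disappears unless it is enforced deliberately.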
Vibe coding can also create problems within security teams themselves. It is often practiced individually, which undermines the collaborative and agile nature of DevOps practices. Without structured programming and awareness of security issues, vibe coding can introduce invisible risks.
Defensive strategies
Vibe coding represents a leap in abstraction, enabling programmers to generate code from natural language. And while it lowers the entry barrier and democratizes access to programming, it also increases the risk of misuse by unskilled users. Businesses must take a long-term view. Vibe coding is only the latest AI-driven development that attackers can exploit, and while it is easy to focus on the technology of the moment, organizations should be ready to defend against vibe coding — and against whatever comes next.
The first defensive strategy is to deploy a Zero Trust architecture. At its core, Zero Trust is a security model that assumes no entity should be trusted by default, even inside the network perimeter. The adage "if you can access it, you can compromise it" applies here fully: by reducing or eliminating the attack surface, you protect yourself effectively.

Next, platform-based technologies offer substantial value. The intelligence that platform providers gain from millions of customers is invaluable. It works a bit like herd immunity: when a fix or detection is applied for one customer, it benefits everyone on the platform.

Finally, companies should adopt a proactive security posture, complementing defense with active threat hunting. By finding and mitigating risks before they worsen, organizations strengthen their overall security posture.
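To illustrate the Zero Trust principle described above, here is a minimal default-deny sketch. All names, fields, and the policy structure are hypothetical, chosen only to show the core idea: access is refused unless identity, device posture, and an explicit policy entry all check out.

```python
# Hypothetical default-deny access check illustrating Zero Trust.
# The request fields and policy table are illustrative, not any
# specific vendor's API.

from dataclasses import dataclass


@dataclass(frozen=True)
class AccessRequest:
    user: str
    device_compliant: bool  # device posture is part of the trust decision
    app: str


# Explicit allow-list: every (user, app) pair must be granted on purpose.
POLICY = {
    ("alice", "payroll"): True,
}


def is_allowed(req: AccessRequest) -> bool:
    """Deny by default; allow only when the device is compliant AND
    an explicit policy entry exists for this user/app pair."""
    if not req.device_compliant:
        return False
    # A missing policy entry means deny, never allow.
    return POLICY.get((req.user, req.app), False)
```

The design choice worth noting is the final line: absence of a rule means denial. Perimeter-based models invert this, trusting anything already inside the network, which is precisely what Zero Trust rejects.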
Outlook
Ultimately, for reasons such as cost efficiency, AI will continue to disrupt how we work and, as a result, influence how we defend ourselves against the evolving threat landscape. In the future, vibe coding could involve multiple AI agents, each responsible for one facet of the process: one handling creativity, another security, and a third structure.
A well-executed cybersecurity program can become a revenue driver, enabling expansion into new markets, greater agility, and better business practices. If done poorly, it exposes organizations to risks tied to the latest AI innovations and trends. By adopting a long-term view of the threat landscape, deploying Zero Trust, and embracing a proactive security posture, organizations can thrive.
Martyn Ditchburn is CTO of Zscaler for the EMEA region.