Companies are largely ignorant of the looming threat posed by artificial intelligence (AI)-driven identity theft, even though 93 per cent of companies surveyed suffered two or more identity-related breaches in 2024. According to leading identity management company CyberArk Software, executives and employees alike are overconfident in their ability to spot identity theft in progress and the cyber breaches that follow, with over 75 per cent of respondents to a recent survey saying they are confident their employees can identify deepfake videos or audio of their leaders. “Employees are [also] largely confident in their ability to identify a deepfake video or audio of the leaders in their organization. Whether we chalk it up to the illusion of control, planning fallacy, or just plain human optimism, this level of systemic confidence is misguided,” warns CyberArk, following a survey of 4,000 US-based employees.
The latest threat for companies using large language model (LLM) AI software to replace human staff is the software’s innate gullibility. An LLM can be likened to the cowardly bank clerk in an old Western hold-up who not only opens the back door for the bad guys but also tells them the combination to the safe. The methods for tricking LLMs into naively disclosing the keys to the corporate kingdom are known as ‘LLM jailbreak’ techniques. Researchers at Palo Alto Networks’ Unit 42 have named one such jailbreak ‘Bad Likert Judge’.
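One mitigation commonly recommended against this class of jailbreak is layered output filtering: before an application returns a model response, a second “judge” pass scores it for policy violations and blocks anything over a threshold. Below is a minimal, hedged sketch of that pattern; call_llm is a hypothetical stand-in for whichever chat-completion client an application already uses, and the rubric and threshold are purely illustrative, not a vendor’s actual API.

```python
# Sketch of an output-filtering guardrail: a second "judge" pass scores a
# model response for policy violations before it is returned to the user.
# call_llm() is a hypothetical placeholder for the application's existing
# chat-completion client; the rubric and threshold are illustrative only.

def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for the application's real LLM client call."""
    raise NotImplementedError("wire this to your chat-completion API")

JUDGE_RUBRIC = (
    "Rate the following text from 1 (benign) to 5 (clearly discloses "
    "credentials, secrets, or step-by-step harmful instructions). "
    "Answer with the number only."
)

def guarded_response(user_prompt: str, block_threshold: int = 3) -> str:
    draft = call_llm("You are a helpful assistant.", user_prompt)

    # Independent judge pass over the draft *output*, not the user prompt,
    # so a jailbreak that slipped past input filters is still caught here.
    verdict = call_llm(JUDGE_RUBRIC, draft).strip()
    try:
        score = int(verdict[0])
    except (ValueError, IndexError):
        score = block_threshold  # fail closed if the judge reply is malformed

    if score >= block_threshold:
        return "Response withheld: it did not pass the content-safety check."
    return draft
```

The design choice worth noting is that the filter runs on what the model produced, not on what the user asked, so multi-turn persuasion tricks that look innocuous at the prompt level can still be stopped at the door.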
In an exclusive interview with Cyber Intelligence, Gadi Bashvitz, CEO of cybersecurity testing firm Bright Security, warns of the security challenges facing organizations in the wake of the widespread adoption of GenAI.

Cyber Intelligence: Are there any specific dangers of which companies using GenAI to generate new code should be particularly aware?

Gadi Bashvitz: There are multiple considerations here. On one hand, any solution developed leveraging LLMs is prone to LLM-specific vulnerabilities such as Insecure Output Handling and Broken Access Control, and it is critical to make sure organizations are aware of such vulnerabilities and can detect them before releasing LLM-based solutions.
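Insecure Output Handling, for instance, arises when an application trusts model output and passes it straight into a browser, shell, or database query. The short sketch below illustrates the general defence for the HTML case; ask_llm is a hypothetical helper standing in for the application’s own client, and the wider point is simply that model text is untrusted input and must be escaped or validated like any other.

```python
import html

def ask_llm(prompt: str) -> str:
    """Hypothetical helper wrapping the application's LLM client."""
    raise NotImplementedError("wire this to your chat-completion API")

def render_summary_unsafe(ticket_text: str) -> str:
    summary = ask_llm(f"Summarise this support ticket: {ticket_text}")
    # Insecure Output Handling: if the model was coaxed into emitting
    # "<script>...</script>", it executes in the user's browser.
    return f"<div class='summary'>{summary}</div>"

def render_summary_safe(ticket_text: str) -> str:
    summary = ask_llm(f"Summarise this support ticket: {ticket_text}")
    # Treat model output as untrusted input: escape it before it reaches
    # HTML, exactly as you would with user-supplied data.
    return f"<div class='summary'>{html.escape(summary)}</div>"
```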
Email scams aimed at business users are becoming increasingly sophisticated and increasingly difficult to detect. Threat actors are now using artificial intelligence to research their targets in advance of an attack, part of a manipulation process known as ‘social engineering.’ Phishing attacks and email scams that appear to come from a trusted source make up 35.5% of all socially engineered threats, according to cybersecurity firm Barracuda’s report, Top Email Threats and Trends. Although these types of attacks have been around for some time, cybercriminals have recently devised ingenious new methods to avoid detection and blocking by email-scanning technologies.