Companies using public artificial intelligence (AI) services such as Microsoft-backed OpenAI's ChatGPT face a growing risk of exposing confidential data to cybercriminals. According to cybersecurity firm Group-IB's Hi-Tech Crime Trends Report 2023/2024, more than 130,000 unique compromised hosts with access to OpenAI were identified between June and October 2023, a 36 percent rise over the first five months of the year. Companies currently take one of two main approaches to integrating AI into their workflows: using public AI models, or building bespoke proprietary AI systems on top of pre-trained, publicly available models. The second approach is by far the safer, as it lets a company control data exchange with the AI system at every stage and thereby preserve confidentiality, but it is also far more expensive and labor-intensive than relying on less secure public AI services.
Group-IB also discovered a new iOS Trojan, dubbed "GoldPickaxe.iOS", built to steal facial recognition data from infected iOS devices. The GoldPickaxe Trojan abuses Apple's TestFlight platform: victims receive seemingly innocuous URLs that download the malware when clicked. According to Group-IB, the stolen biometric data is then used to gain unauthorized access to victims' banking accounts.
The UK's Newsquest Media Group reported a cyberattack that disrupted its websites and apps to the UK National Cyber Security Centre (NCSC) on Monday, December 11th. The media company, which operates more than 250 local news sites, stated that a series of Distributed Denial-of-Service (DDoS) attacks disrupted the reading experience of an estimated 48 million monthly readers.