Tag: openai

AI system blackmails its creator

Artificial Intelligence (AI) is learning to think like a human. But the critical question now being asked in IT circles is: “What kind of human?” Claude Opus 4, a groundbreaking new AI system released on Tuesday by AI developer Anthropic, attempted to blackmail its creator by threatening to expose an alleged extramarital affair. This follows other AI systems that, while programmed to interact with humans effectively, lie by making up fake information, a phenomenon known to developers as “hallucinating”.

3 Min Read

Chinese AI offering rattles Big Tech investors

The start of this week saw roughly $1 trillion wiped off leading US tech stocks, following the launch of DeepSeek, a Chinese rival to AI offerings such as Microsoft-backed OpenAI’s ChatGPT. What has really spooked the markets is that the Chinese artificial intelligence (AI) assistant uses less data and generates lower all-round costs than its current Silicon Valley rivals. The expense of training and developing DeepSeek’s models is claimed to be only a small fraction of that required for OpenAI, calling into question the need to invest in the latest and most powerful AI accelerator chips from Nvidia. At the start of trading this week, shares in Nvidia dropped a full 10 percent, and AI data analytics company Palantir lost seven percent in pre-market trading. Microsoft, Google’s parent company Alphabet, and Meta also all experienced a drop in their share price.

3 Min Read

Musk deems “Apple Intelligence” offering insecure

Bereft of fresh ideas or new products, Apple’s main offering at its long-awaited annual Worldwide Developers Conference in Cupertino, California, is a cobbled-together artificial intelligence (AI) offering. While AI may be Silicon Valley’s latest buzzword and marketing tool, “Apple Intelligence,” as Apple AI is branded, is already attracting heavy criticism – even from other tech giants. By pairing Microsoft-backed OpenAI’s ChatGPT with Apple’s voice-activated assistant, Siri, Apple hopes to make AI mainstream. But its critics say that all Apple has done is create a cybersecurity nightmare for corporations while sounding a death knell for the personal privacy of Apple users. “It’s patently absurd that Apple isn’t smart enough to make their own AI, yet is somehow capable of ensuring that OpenAI will protect your security & privacy!... Apple has no clue what’s actually going on once they hand your data over to OpenAI. They’re selling you down the river,” says Elon Musk, Tesla and SpaceX founder and the owner of X Corp, formerly Twitter.

3 Min Read

Ticketmaster Breach Data Posted on New BreachForums – May 31st

The 'ShinyHunters' threat actor group posted data from a Ticketmaster breach, potentially belonging to 560M users, asking for $500K in exchange for it. Analysts at vx-underground examined a sample of the Ticketmaster data and determined that it was authentic, containing entries dating back to 2011.

2 Min Read

AI is fueling China’s cyber war against the US

Once again, China is harnessing new Western technology to attack and undermine the US at home and overseas. According to a new report from Microsoft, this time China is using AI-generated fake social media accounts to influence the outcome of the upcoming US presidential elections. The report, Same targets, new playbooks: East Asia threat actors employ unique methods, details China’s recent attempts to discredit the US government, including misinformation about the Kentucky train derailment in November, the Maui wildfires in August, the disposal of Japanese nuclear wastewater, and illegal drug use in the US, as well as efforts to exacerbate the increasing racial tensions across the US.

3 Min Read

OpenAI’s voice cloning raises security concerns

OpenAI, the maker of Microsoft-backed consumer-facing artificial intelligence (AI) service ChatGPT, may have scored something of an own-goal with the unveiling of Voice Engine, billed as “a model for creating custom voices”. While OpenAI’s blog on Friday highlights the legitimate use of voice cloning, sometimes referred to as ‘deepfake voice’, such as providing reading assistance to non-readers and children, its widespread availability could soon metamorphose into a cybersecurity nightmare. Deepfake voice and video software are already being used by cybercriminals to mimic the voices of senior executives to commit financial fraud and other crimes. But the widespread availability and marketing of deepfake voice software is now set to make cybercrime a virtual cottage industry where any number can play. It will open the floodgates to a whole new generation of cybercriminals, terrorists, pranksters, and disgruntled employees.

4 Min Read

Rise in Tax-Related Phishing Scams Detected – March 22nd

Microsoft's Threat Intelligence arm issued a warning on the rise of new, sophisticated tax phishing scams that could lead to stolen personal and financial data. These tax-related phishing scams are initiated by impersonating trusted employers, tax agencies, and payment processors. Victims click on a malicious attachment, which leads to a believable landing page designed to capture sensitive information.

2 Min Read

Security Flaws Found in ChatGPT Plugins – March 15th

According to Salt Labs research, security flaws in third-party OpenAI ChatGPT plugins could allow attackers to install malicious plugins and hijack third-party website accounts. The flaws stem from security gaps in the OAuth workflow used by ChatGPT plugins and in the PluginLab framework, both of which contain weaponizable vulnerabilities.

1 Min Read

11 Romantic AI Chatbots Fail Security Tests – February 15th

The Mozilla Foundation released research revealing that all 11 romantic AI chatbots tested failed security and privacy tests. All 11 chatbots raise data privacy concerns, pulling far more data than is needed from the collective 100 million users of these chatbots. Mozilla urges the chatbots' makers to stop exploiting vulnerable users and to adopt more transparent data privacy practices.

1 Min Read

H2 2023 Dominated by AI Malicious Activity and Android Spyware Threats – December 28th

According to an ESET report, the threat landscape of the second half of 2023 was dominated by AI-generated malicious activity and newly emerged Android spyware. Based on the firm's recorded incidents, ESET's "Threat Report: H2 2023" also states that a new economy has arisen around OpenAI API keys, particularly among cybercriminals.

1 Min Read

OpenAI Launches Initiative to Tackle Growing AI Risks – November 6th

OpenAI has announced a new team intended to counter the risks posed by generative AI systems. Labeled the "preparedness" unit, the new OpenAI branch will be tasked with setting preventive measures for systemic AI risks, which include individual persuasion, cybersecurity, autonomous replication and adaptation, and chemical, biological, radiological, and nuclear (CBRN) threats.

1 Min Read