While Silicon Valley is finding artificial intelligence (AI) a tough sell to businesses and consumers, cybercriminals worldwide have lost little time in adapting the technology to cybercrime. The latest rogue AI offering is GhostGPT. According to Abnormal Security, GhostGPT follows hard on the heels of earlier illicit AI offerings: WormGPT, WolfGPT, and EscapeGPT. To test its capabilities, Abnormal Security researchers asked GhostGPT to create a Docusign phishing email. The chatbot produced a convincing template with ease, demonstrating its ability to trick potential victims.
The Mozilla Foundation released research revealing that all 11 romantic AI chatbots it tested failed its security and privacy review. All 11 raise data privacy concerns, collecting far more data than necessary from a combined user base of roughly 100 million people. Mozilla urges the makers of these chatbots to stop exploiting vulnerable users and to adopt more transparent data privacy practices.