California-based cybersecurity goliath Palo Alto Networks has issued a bullish revenue forecast based on a perceived rising global demand for artificial intelligence (AI)-driven security products.
“In Q2 [2025], our strong business performance was fuelled by customers adopting technology driven by the imperative of AI, including cloud investment and infrastructure modernization,” said CEO Nikesh Arora. “Our growth across regions and demand for our platforms demonstrates our customers’ confidence in our approach. It reaffirms our faith in our 2030 plans and our $15 billion next-generation technology annual recurring revenue goal.”
Citing continued enterprise investment in cybersecurity solutions, Palo Alto Networks raised its full-year revenue forecast for fiscal 2025 (FY25) to between $9.14 billion and $9.19 billion, exceeding previous projections. Next-generation security annual recurring revenue (ARR) is predicted to reach between $5.03 billion and $5.08 billion, reflecting year-on-year growth of up to 34 percent. Last month, Palo Alto also announced that it would be working with IBM UK on a multi-year project to develop an Emergency Services Network in the UK.
Response to AI-powered cyber-attacks
The anticipated rising demand for AI-driven security is largely a response to enterprises needing to defend their networks against increasingly sophisticated AI-driven cyber-attacks from cybercriminals and potentially hostile nation-states. Palo Alto Networks’ bullish forecasts come hard on the heels of news that cybercriminals have been swift to weaponize AI. The latest rogue AI offering is GhostGPT, which follows earlier illicit AI offerings such as WormGPT, WolfGPT, and EscapeGPT.
“GhostGPT is a chatbot specifically designed to cater to cybercriminals…By eliminating the ethical and safety restrictions typically built into AI models, GhostGPT can provide direct, unfiltered answers to sensitive or harmful queries that would be blocked or flagged by traditional AI systems,” said cybersecurity firm Abnormal Security.
Cybercriminals have also been quick to see nefarious possibilities in other legitimate AI offerings, such as search engine giant Google’s new Gemini 2.0 AI assistant. According to Google’s own findings, nation-state-backed threat actors are already leveraging Gemini to accelerate their criminal campaigns.