The verdict on artificial intelligence (AI) from the real experts is finally in: professional cybercriminal fraternities have judged AI to be “overrated, overhyped and redundant,” according to fresh research from cybersecurity firm Sophos.
It has, hitherto, been accepted wisdom in the cybersecurity industry that cybercriminals, free from any regulatory authority or moral scruples, were among the first to harness the awesome power of AI to create bespoke and virtually unstoppable malware. However, having infiltrated the Dark Web forums where top professional cybercriminals discuss their trade, Sophos reports that the cybercrime sector has thoroughly tested the capabilities of AI and found it wanting.
“There is a lot of skepticism about tools like [Microsoft-backed] ChatGPT – including arguments that it is overrated, overhyped, redundant and unsuitable for generating malware,” Sophos reports from the criminals’ forums, adding that cybercriminal groups were equally dismissive of Dark-Web AI spin-offs such as EvilGPT and DarkGPT.
Politicians’ grasp on reality weakens
Tomorrow, November 30, marks exactly one year since OpenAI launched ChatGPT. Free of any restrictions, and frequently backed by the cyber resources of nation-states such as China, Russia, North Korea, and Iran, organized international cybercriminal groups have had plenty of time to test the exaggerated claims made for AI. They have discovered, for example, that AI cannot even code usable malware, let alone convincingly mimic every aspect of human behavior.
It was only a week ago that UK Prime Minister Rishi Sunak warned the world that the potential power of AI poses as great a threat to humanity as nuclear warfare, echoing similar sentiments expressed by US President Biden, whose focus on AI is said to have sharpened after watching Mission: Impossible – Dead Reckoning Part One while at Camp David this year.
Even Jonathan Swift, author of Gulliver’s Travels, would find it tough to satirize the reality gap that now exists between the political view of AI as something that could take over the human race and the dawning reality that it is simply another software offering that has been drastically over-hyped by Silicon Valley salesmen and science-fiction writers.
There is, however, a very real and present danger that the leaders of the Free World may start to impose restrictive and nonsensical regulations in a misguided attempt to curb the awesome but largely mythical power of AI.
EU a threat to US tech development
As long ago as April 2021, that vast unelected bureaucracy, the European Commission, set the regulatory wheels in motion by proposing the first EU regulatory framework for AI, which stipulates that all AI systems must be analyzed and classified according to the potential risk they pose to users. But as Europe is struggling to develop any significant AI capacity of its own, AI developers in countries including the EU’s two leading players, France and Germany, are starting to kick back against the EU’s ludicrous but nonetheless draconian proposals.
According to sources at last weekend’s AI conference held in Paris’s vast high-tech complex, Station F, there is a mounting feeling of anger at the prospect of restrictions that would effectively hobble the development of any new AI models. There are also very real fears that, like the EU’s General Data Protection Regulation (GDPR) before it, EU regulations on AI would have repercussions for US companies with online customers inside the EU.