The UK-hosted Artificial Intelligence (AI) Safety Summit, due to take place on Wednesday and Thursday this week and attended by world leaders and AI experts, is set to become the focus of a widening global debate on the dangers of AI. Last Thursday, UK Prime Minister Rishi Sunak set out the agenda for the discussion, coming down heavily on the side of the AI doom-mongers, who once again are warning that AI poses an existential threat to humanity itself.
According to Sunak, there is a “Risk that humanity could lose control of AI through the kind of AI sometimes referred to as ‘super intelligence’… Mitigating the risk of extinction from AI should be a global priority, alongside other societal scale risks such as pandemics and nuclear war”.
But some AI experts due to attend this week’s AI summit, being held at Bletchley Park, famous for Alan Turing’s code-breaking during World War II, are now arguing that governments would do better to concentrate on the real and present dangers posed by new technologies such as AI, rather than stoking panic over fanciful science-fiction-style narratives of machines taking over humanity.
Yann LeCun, chief AI scientist at Meta and winner of the Turing Award, recently commented that a number of “conceptual breakthroughs” would be needed before AI could reach human-level intelligence – a point where a system could evade human control.
“Intelligence has nothing to do with a desire to dominate. It’s not even true for humans,” LeCun added.
Others from the commercial sector also argue that politicians must urgently try to tackle the immediate and tangible dangers posed by publicly-available generative AI services such as OpenAI’s Microsoft-backed ChatGPT. According to risk and compliance specialist Riskonnect, while 93 percent of companies anticipate significant threats associated with the widespread rollout of AI, fewer than one in ten (nine percent) say they are properly prepared to tackle AI risks. Companies’ chief concerns regarding the widespread adoption of AI are data privacy and cyber issues (65 percent), closely followed by the threat of employees making poor business decisions based on erroneous information (60 percent).
While politicians and sociologists worry about the danger of AI being used by the unscrupulous to create deepfake images to spread misinformation ahead of elections, business users are understandably more troubled by consumer-facing AI’s tendency simply to make things up. This disturbing flaw in generative AI is weirdly referred to in AI circles as “hallucinating” and frequently results in AI supplying fabricated information in the form of invented financial figures, fictional experts, and other made-up data.
Speaking ahead of this week’s AI summit, Sunak said that global solutions are needed to address the challenges now presented by AI: “AI doesn’t respect borders…I believe we should take inspiration from the Intergovernmental Panel on Climate Change.”
But there are now rapidly growing fears in the tech industry that senior politicians like Sunak may feel obliged to rush through badly thought-out legislation aimed at controlling the future uses of AI without fully grasping either the technology behind it or its severe limitations in real-world applications.
AI not sentient – merely a digital tool
Speaking at the International Information System Security Certification Consortium’s recent ISC2 Security Congress 2023, Kyle Hinterburg, a manager at information security specialist LBMC, emphasized that, despite the many fantastical predictions currently being made by politicians and the media, AI systems are not sentient beings but merely digital tools that have been developed, trained and used by humans.
Nevertheless, there are very real fears that unscrupulous and ruthless nation-states could use AI to accelerate the development of new threats. This week’s summit is expected to be attended by EU chief Ursula von der Leyen, although democratically-elected world leaders such as US President Joe Biden, Germany’s Olaf Scholz, France’s Emmanuel Macron and Canada’s Justin Trudeau will not be in attendance.
However, the list of those who will be attending includes representatives of the Chinese Communist Party. Former UK prime minister Liz Truss has demanded that Sunak, her successor, rescind his invitation to China because of Beijing’s “cavalier attitude” to international laws.
“The key question is now, who are they sending over? And there are some very, very nasty, dodgy people that are running this area [AI] in China… China poses the single biggest threat to us in this area,” says former UK Conservative Party leader Sir Iain Duncan Smith.
Duncan Smith, who has already been sanctioned by Beijing over his outspoken criticism of human rights abuses in Xinjiang, warns that a combination of AI and genomics (DNA research) is enabling China to develop new and sinister threats against Western democracies.
“They’re the world leaders in genomics…They forcibly extract genes from people in Xinjiang. They abuse people, and they steal that data. Genomics coming together with AI is the bit that simply won’t be properly discussed at this conference because it’s there where China poses a very big threat,” warns Duncan Smith.
The US has also announced plans for an executive order on AI, following President Joe Biden’s pledge earlier this year to take executive action to ensure “America leads the way toward responsible AI innovation.”