US President Joe Biden has issued an executive order aimed at regulating artificial intelligence (AI), urging Congress to pass the necessary legislation as swiftly as possible. The announcement came just two days before the AI Safety Summit in the UK, which US Vice President Kamala Harris will attend. The push to legislate quickly indicates that the threat of AI is being taken seriously globally, with governments taking a coordinated approach. A wave of legislation and backroom deals with technology companies is surely set to follow.
The White House said yesterday that “AI’s challenges and opportunities are global. The Biden-Harris Administration will continue working with other nations to support the safe, secure, and trustworthy deployment and use of AI worldwide.”
The avowed aim of the Executive Order is to protect the most vulnerable members of society, particularly children and gullible consumers, from exploitation. It therefore focuses heavily on several key aspects of cybersecurity, including AI-enabled fraud and deception. AI is frequently used in so-called “phishing” attacks, where a carefully crafted email is sent to an unsuspecting recipient purporting to come from a trusted colleague, relative, or friend. These emails generally contain a weaponized link that, once clicked, secretly compromises the recipient’s device.
Among the first adopters of commercially successful generative AI tools, such as ChatGPT from Microsoft-backed OpenAI, were cybercriminals. AI is already proving invaluable to wealthy ransomware gangs such as LockBit when targeting top executives or key personnel. It enables them not only to craft bespoke malware quickly but also to trawl social networks for a target’s personal details and then use AI to write a plausible-sounding email.
Bogus AI-driven deepfake videos
AI is also already being used to create plausible-seeming but bogus ransom demands from loved ones or colleagues supposedly traveling overseas. Sometimes the demands are accompanied by what sounds like the voice of the ‘kidnap victim’ pleading to ‘do whatever they ask.’ There are fears that AI will enable ordinary con artists to create realistic-looking videos of their ‘victims’ being abused, using the kind of AI-driven deepfake technology already deployed in political misinformation campaigns.
To try to prevent these types of crime, the US Department of Commerce plans to develop guidance for content authentication and watermarking so that AI-generated content is clearly labeled. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic. The White House hopes this will set an example for the private sector and for governments around the world.
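The Executive Order does not prescribe a particular technical mechanism, so the sketch below is purely illustrative rather than anything the order mandates. It shows, in standard-library Python, the basic idea behind content authentication: attaching a verifiable cryptographic tag to a message so a recipient can detect tampering or spoofing. The key and the sample messages are hypothetical; a real government deployment would rely on public-key signatures and provenance standards rather than a shared secret.

```python
# Illustrative sketch of content authentication (not the scheme specified in the
# Executive Order). A sender computes a cryptographic tag over a message; anyone
# holding the verification key can confirm the message was not altered or spoofed.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-securely-managed-key"  # hypothetical key for the demo


def sign_message(message: str) -> str:
    """Return a hex HMAC-SHA256 tag binding the message to the key holder."""
    return hmac.new(SECRET_KEY, message.encode("utf-8"), hashlib.sha256).hexdigest()


def verify_message(message: str, tag: str) -> bool:
    """Recompute the tag and compare in constant time; True only if both match."""
    return hmac.compare_digest(sign_message(message), tag)


if __name__ == "__main__":
    notice = "Official notice: your benefits statement is ready."
    tag = sign_message(notice)
    print(verify_message(notice, tag))                   # True: authentic, unmodified
    print(verify_message(notice + " Click here!", tag))  # False: message was tampered with
```

Watermarking AI-generated media works on a similar principle but embeds the signal in the content itself (for example, in pixel or token statistics) so that the label survives copying; the guidance the Department of Commerce is tasked with producing is expected to cover both approaches.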
But, as with most government legislation aimed at curbing the excesses of new technologies, it fails to fully account for recent developments in cybercrime, where criminals have been using AI from the outset. They have been able to do so while remaining beyond the reach of the authorities by orchestrating their crimes from the anonymity of the Dark Web. Rather than risk being observed using legitimate services such as ChatGPT, cybercriminals have already developed their own Dark Web alternative, FraudGPT. It is doubtful whether they will feel obliged to watermark its output in accordance with the US administration’s wishes.
Other areas highlighted in the White House Fact Sheet outlining the Executive Order include AI threats to critical infrastructure and to chemical, biological, radiological, and nuclear (CBRN) security.