The European Union (EU) Artificial Intelligence (AI) Act, which came into effect earlier this month, is now set to serve as a template for other regions, such as the US. The American government has already drafted a Blueprint for an AI Bill of Rights, which aims to create a similar framework for regulating AI.
However, while governments are rightly concerned about the implications of universal AI adoption for personal privacy, some have a dangerously bullish view of the new technology’s potential. Despite a deluge of hilarious howlers, such as Google’s AI-generated images of African Vikings and racially diverse American founding fathers, politicians anxious not to be left behind in the tech race have swallowed Silicon Valley’s AI hype hook, line, and sinker.
If anything, new international regulation is needed to bridle state-level enthusiasm for a “new technology” that is already proving to have been overhyped and prematurely released to an unsuspecting public. For example, Argentine President Javier Milei has announced the creation of an Artificial Intelligence Unit Applied to Security (UIAAS). It will use “machine learning algorithms” to analyze “historical crime data” with the goal of “predicting future crimes and contributing to their prevention.” It is not yet clear whether Milei has simply never seen Steven Spielberg’s 2002 movie Minority Report, starring Tom Cruise, which presents a dystopian scenario based on precisely this idea, or whether he saw it and assumed he was watching a scientific documentary.
Blind faith in AI is not restricted to Argentina
What is abundantly clear, however, is that such blind faith in the infallibility of AI is not restricted to Argentina. In the US, it was reported in late 2023 that President Joe Biden was galvanized into taking AI seriously after watching a more recent movie starring Tom Cruise, Mission: Impossible – Dead Reckoning Part One, in which the super-villain is an AI program that has gone rogue.
Last year, former UK prime minister Rishi Sunak also appeared to have swallowed the AI Kool-Aid when he declared that there is a “risk that humanity could lose control of AI through the kind of AI sometimes referred to as ‘superintelligence’… Mitigating the risk of extinction from AI should be a global priority, alongside other societal-scale risks such as pandemics and nuclear war”. The UK’s newly elected Labour government is, however, understood to have quashed Sunak’s planned £1.3 billion investment in AI.
The new EU AI Act is more modest in its expectations of AI and aims to establish specific technical requirements in areas such as data quality, human oversight, accuracy, transparency, and accountability. The act takes a risk-based approach, with obligations scaled to the potential harm an AI system can cause, and organizations must conduct conformity assessments for high-risk systems. It is hoped that this risk-based classification will set a global standard for the development and use of AI.