Apple has joined Google and Microsoft in launching its own generative artificial intelligence (AI) offering, OpenELM. Apple claims that OpenELM, “a state-of-the-art open language model,” will offer users more accurate and less misleading results than its widely criticized competitors. “OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy,” says Apple. Apple claims that OpenELM exhibits a 2.36 percent improvement in accuracy compared to its initial predecessor OLMo, while requiring half as many pre-training tokens. So far, Apple has delayed offering modern AI capabilities on its devices, but it is expected that the next version of its operating systems will need to include some unique AI features. The launch of iOS 18 is scheduled for June 10.
Five years after its proposal, European Union lawmakers approved the artificial intelligence law, a world-first on AI rules. Centered around consumer safety, the EU's AI Act takes a "risk-based approach" to AI-powered products.
Researchers from the Israel Institute of Technology, in collaboration with Intuit, and Cornell Tech developed the "Morris II Worm" to automatically leverage GenAI systems to spread malware and steal data. The researchers made the worm to demonstrate the dangers behind GenAI systems through the dangerous "0-click propagation" worm which unleashes unprompted payloads, allowing easier attacks from threat actors.
The reaction of businesses to the introduction of generative AI (GenAI) in the year since the launch of Microsoft-backed ChatGPT is one of increasing suspicion and disappointment. Over one in four organizations have banned the use of GenAI outright. The majority of companies are now also refusing to trust a technology that has already gained a reputation for making errors and even entirely fabricating information, a failing that is referred to as “hallucinating”. According to Cisco’s newly-released 2024 Data Privacy Benchmark Study, 68 percent of organizations mistrust GenAI because it gets results wrong and 69 percent also believe it could hurt their company’s legal rights. The study draws on responses from 2,600 privacy and security professionals across 12 geographies.
The New Year is set to start with a call to regulate artificial intelligence (AI) coming from a man whose views are considered by hundreds of millions of people to be infallible. On New Year’s Day, His Holiness Pope Francis is scheduled to issue a stark warning to the governments of the world on the dangers inherent in AI. On January 1, 2024, His Holiness will announce: “Techno-scientific advances, by making it possible to exercise hitherto unprecedented control over reality, are placing in human hands a vast array of options, including some that may pose a risk to our survival and endanger our common home”. Having warned that AI is a threat not to humanity but to the existence of the Planet Earth itself, His Holiness will then exhort “the global community of nations” to urgently adopt a binding international treaty to regulate not only the use of AI, but also its development.
Next Wednesday will see the last round in a “King Kong meets Godzilla"-style contest between the European Union and the global technology sector over proposed regulations from Brussels to control AI. The opening rounds have been fought by lawyers, lobbyists, and bureaucrats over the monitoring of foundation model AI services such as GPT-4, access to source codes, fines for disobeying the Brussels rulings, and other related topics. However, EU member states France, Germany, and Italy are known to be opposed to the EU’s proposed rulings and to favor self-legislation by the technology sector, as opposed to being constrained by hard rules dictated by Brussels. French AI company Mistral and Germany's Aleph Alpha have criticized the EU’s tiered approach to regulating foundation models, defined as those with more than 45 million users.
The verdict on artificial intelligence (AI) from the real experts is finally in; professional cybercriminal fraternities have judged AI to be “overrated, overhyped and redundant,” according to fresh research from cybersecurity firm Sophos. It has, hitherto, been accepted wisdom in the cybersecurity industry that cybercriminals, free from any regulatory authority or moral scruples, were among the first to harness the awesome power of AI to create bespoke and virtually unstoppable malware. However, having infiltrated the Dark Web forums where top professional cybercriminals discuss their trade, Sophos reports that the cybercrime sector has thoroughly tested the capabilities of AI and found it wanting.
In a startling revelation, Vikas Singla, the former COO of cybersecurity firm Securolytics, confessed to hacking two Georgia hospitals in June 2021 to enhance the company’s profile. Singla disrupted services at Gwinnett Medical Center hospitals, stealing patient data and publicizing the breach on Twitter. Facing 17 counts of computer damage and one count of information theft, Vikas Singla agreed to pay over $817,000 in restitution. Due to health issues, prosecutors recommended 57 months of probation, raising concerns about cyber threats jeopardizing public safety and healthcare data.
SlashNext's "State of Phishing Report for 2023" report stated the 1265% phishing increase in malicious phishing emails since Q4 2022, correlating to ChatGPT's launch. It was also reported that 31,000 phishing emails were sent on a daily basis in the past year, 68% of them being text-based Business Email Compromise (BEC).
US President Joe Biden has issued an executive order aimed at regulating artificial intelligence (AI), urging Congress to pass the necessary legislation as swiftly as possible. The announcement was made only 48 hours before tomorrow’s Global AI Summit in the UK, which US Vice President Kamala Harris will attend. The push to swiftly legislate indicates that the threat of AI is being taken seriously globally, with governments taking a coordinated approach. A mass of legislation and backroom deals with IT companies is surely set to follow.
The UK-hosted Artificial Intelligence (AI) Safety Summit due to take place on Wednesday and Thursday this week, attended by world leaders and AI experts, is set to become the focus of a widening global debate on the dangers of AI. Last Thursday, UK Prime Minister Rishi Sunak set out the agenda for the discussion, coming down heavily on the side of the AI doom-mongers, who once again are warning that AI poses an existential threat to humanity itself.
Google's Vulnerability Rewards Program (VRP), a program made to reward researchers who find system vulnerabilities, has been expanded for generative AI. Google explained the expansion of the VRP as a reaction to the risks brought by AI, and the magnified implications it has for traditional digital security.
Sign in to your account