Most organizations have no clear idea of the value of the data they hold about their own operations and their customers. According to technology research and consulting firm Gartner, 30 percent of chief data and analytics officers (CDAOs) say that their top challenge is the inability to measure the impact of data, analytics, and AI on business outcomes. Gartner also reports that only 22 percent of organizations surveyed have defined, tracked, and communicated business impact metrics for the bulk of their data and analytics (D&A) use cases. “There is a massive value vibe around data, where many organizations talk about the value of data, desire to be data-driven, etc., but there are few who can substantiate it,” said Michael Gabbard, senior director analyst at Gartner.
Gartner issued a stern warning this week to organizations across all sectors that the cost of introducing artificial intelligence (AI) to the workplace could easily balloon by 500 to 1,000 percent. Speaking at Gartner's flagship Symposium event in Australia, VP analyst Mary Mesaglio said: “Factors contributing to these inflated costs include vendor price increases and neglecting the expense of utilizing cloud-based resources.”
Bereft of fresh ideas or new products, Apple’s main offering at its long-awaited annual Worldwide Developers Conference in Cupertino, California, is a cobbled-together artificial intelligence (AI) product. While AI may be Silicon Valley’s latest buzzword and marketing tool, “Apple Intelligence,” as Apple AI is branded, is already attracting heavy criticism – even from other tech giants. By pairing Microsoft-backed OpenAI’s ChatGPT with Apple’s voice-activated assistant, Siri, Apple hopes to make AI mainstream. But its critics say that all Apple has done is create a cybersecurity nightmare for corporations while sounding a death knell for the personal privacy of Apple users. "It's patently absurd that Apple isn't smart enough to make their own AI, yet is somehow capable of ensuring that OpenAI will protect your security & privacy!... Apple has no clue what's actually going on once they hand your data over to OpenAI. They're selling you down the river,” says Elon Musk, Tesla and SpaceX founder and the owner of X Corp, formerly Twitter.
Apple has joined Google and Microsoft in launching its own generative artificial intelligence (AI) offering, OpenELM. Apple claims that OpenELM, “a state-of-the-art open language model,” will offer users more accurate and less misleading results than its widely criticized competitors. “OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy,” says Apple. Apple claims that OpenELM delivers a 2.36 percent improvement in accuracy over the comparable open model OLMo while requiring half as many pre-training tokens. Apple has so far held back from offering modern AI capabilities on its devices, but the next versions of its operating systems are expected to include distinctive AI features. iOS 18 is due to be unveiled on June 10.
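To illustrate what a layer-wise scaling strategy means in practice, the sketch below distributes attention heads and feed-forward width unevenly across a transformer's depth rather than using one uniform setting for every layer. It is an illustrative assumption, not Apple's OpenELM code: the model dimension, head size, and scaling bounds are placeholders chosen only to show the general idea.

# Illustrative sketch of layer-wise parameter scaling in a transformer stack.
# NOT Apple's OpenELM implementation: the bounds, head size, and linear
# interpolation below are assumptions used only to demonstrate giving each
# layer its own head count and feed-forward width.

def layerwise_config(num_layers: int,
                     model_dim: int = 1280,
                     head_dim: int = 64,
                     alpha: tuple = (0.5, 1.0),   # multiplier range for attention heads
                     beta: tuple = (2.0, 4.0)):   # multiplier range for FFN expansion
    """Return a per-layer (num_heads, ffn_dim) plan, interpolated over depth."""
    configs = []
    for i in range(num_layers):
        t = i / max(num_layers - 1, 1)            # 0.0 at the first layer, 1.0 at the last
        a = alpha[0] + t * (alpha[1] - alpha[0])  # head-count multiplier for this layer
        b = beta[0] + t * (beta[1] - beta[0])     # FFN expansion multiplier for this layer
        num_heads = max(1, round(a * model_dim / head_dim))
        ffn_dim = round(b * model_dim)
        configs.append({"layer": i, "num_heads": num_heads, "ffn_dim": ffn_dim})
    return configs

if __name__ == "__main__":
    for cfg in layerwise_config(num_layers=8):
        print(cfg)

Early layers end up narrower and later layers wider, which is the kind of non-uniform parameter allocation the Apple quote refers to.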
Silicon Valley’s tech giants are fond of publicizing their green credentials by installing everything from waterless urinals to solar power. But, according to a new report from the International Energy Agency (IEA), tech giants’ latest offerings, primarily artificial intelligence (AI), are driving energy consumption to unprecedented levels. The report, Electricity 2024 Analysis and Forecast to 2026, predicts that, if current trends continue, electricity consumption from data centers, AI, and cryptocurrency could more than double, from 460 TWh in 2022 to as much as 1,050 TWh in 2026, roughly equivalent to adding another Germany to global electricity consumption. According to the IEA, there are currently more than 8,000 data centers globally, about a third of them in the United States, where the largest data center hubs are in California, Texas, and Virginia.
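For readers who want to check the comparison, the short calculation below reproduces the arithmetic behind the "another Germany" claim. The 460 TWh and 1,050 TWh figures are the ones quoted from the report; Germany's roughly 500 TWh of annual electricity consumption is an outside assumption used only for scale.

# Back-of-the-envelope check on the IEA figures quoted above. Germany's annual
# electricity consumption (~500 TWh) is an assumed rough figure for comparison;
# the 2022 and upper-end 2026 values are those cited in the report.

consumption_2022_twh = 460      # data centers, AI and crypto, 2022
consumption_2026_twh = 1050     # upper-end projection for 2026
germany_annual_twh = 500        # assumed rough annual consumption of Germany

increase = consumption_2026_twh - consumption_2022_twh
print(f"Projected increase: {increase} TWh "
      f"(~{increase / germany_annual_twh:.1f}x Germany's annual consumption)")
print(f"Growth factor 2022 to 2026: {consumption_2026_twh / consumption_2022_twh:.2f}x")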
OpenAI, the maker of Microsoft-backed consumer-facing artificial intelligence (AI) service ChatGPT, may have scored something of an own-goal with the unveiling of Voice Engine, billed as “a model for creating custom voices”. While OpenAI’s blog on Friday highlights the legitimate use of voice cloning, sometimes referred to as ‘deepfake voice’, such as providing reading assistance to non-readers and children, its widespread availability could soon metamorphose into a cybersecurity nightmare. Deepfake voice and video software are already being used by cybercriminals to mimic the voices of senior executives to commit financial fraud and other crimes. But the widespread availability and marketing of deepfake voice software is now set to make cybercrime a virtual cottage industry where any number can play. It will open the floodgates to a whole new generation of cybercriminals, terrorists, pranksters, and disgruntled employees.
Ever since the launch of the deeply flawed Microsoft-backed public-facing artificial intelligence (AI) service ChatGPT at the end of 2022, AI has been used to power a whole range of services. But the days of marketing and PR departments simply attaching the words “AI-driven” to over-hype any digital offering in the hope of attracting investors and customers are now hopefully coming to an end. Earlier this week, the US Securities and Exchange Commission (SEC) fined two investment advisers, Delphia (USA) Inc. and Global Predictions Inc., a total of US$400,000 between them. The SEC’s order against Global Predictions alleged that the San Francisco-based firm made false and misleading claims in 2023 on its website and on social media about its purported use of AI. The order against Toronto-based Delphia alleged that the firm had made false and misleading statements in its SEC filings, in a press release, and on its website regarding its purported use of AI and machine learning.
According to Salt Labs research, security flaws in third-party OpenAI ChatGPT plugins could allow attackers to install malicious plugins on users' accounts and hijack those users' accounts on third-party websites. The researchers found weaponizable vulnerabilities both in the OAuth workflow used to connect large language model (LLM) plugins to ChatGPT and in PluginLab, a framework used to build such plugins.
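The following is a generic, hypothetical sketch of the class of OAuth weakness described, not the code of ChatGPT, PluginLab, or any specific plugin: a callback handler that skips the state check and trusts a caller-supplied redirect, which is the kind of gap that can let an attacker bind a victim's account to the attacker's identity or capture the victim's authorization code.

# Generic illustration of a weak OAuth callback, assuming a small Flask app.
# This is NOT code from any of the products named above; the route, parameter
# names, and link_account() helper are hypothetical.

from flask import Flask, request, redirect

app = Flask(__name__)

def link_account(code):
    # Placeholder: exchange the authorization code for tokens and attach them
    # to the current user's session.
    pass

@app.route("/oauth/callback")
def oauth_callback():
    code = request.args.get("code")           # authorization code from the provider
    state = request.args.get("state")         # anti-CSRF value, received but never checked
    next_url = request.args.get("next", "/")  # caller-controlled redirect target

    # VULNERABLE: no check that `state` matches a value issued for this user's
    # session, and no allow-list for `next_url`. An attacker can therefore send
    # a victim a crafted link carrying the attacker's own `code`, linking the
    # victim's account to the attacker's identity, or bounce the victim's code
    # to a site the attacker controls.
    link_account(code)
    return redirect(next_url)

# Safer pattern (sketch): verify `state` against a session-bound nonce and only
# redirect to URLs on an explicit allow-list before calling link_account().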
Companies using public artificial intelligence (AI) services such as Microsoft-backed ChatGPT are at increasing risk of allowing cybercriminals to access confidential data. According to cybersecurity firm Group-IB’s Hi-Tech Crime Trends Report 2023/2024, between June and October of 2023, over 130,000 unique hosts with access to OpenAI were compromised, representing a 36 percent rise over the first five months of the year. Companies currently take one of two main approaches to integrating AI into their workflows: using public AI services, or building bespoke proprietary AI systems on top of pre-trained, openly available models. The second approach is by far the safer, as it allows a company to control data exchange with the AI system at every stage and keep confidential data in-house, but it is also far more expensive and labor-intensive than relying on public AI services.
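As a rough sketch of the two approaches, the snippet below contrasts sending a prompt to a public AI service with running a pre-trained open model on infrastructure the company controls. The model names, prompt, and client settings are illustrative assumptions rather than recommendations.

# Approach 1: public AI service. The prompt (and anything pasted into it)
# leaves the company network and is processed by a third party.
from openai import OpenAI

public_client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = public_client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": "Summarize this internal memo: ..."}],
)
print(resp.choices[0].message.content)

# Approach 2: bespoke system built on a pre-trained open model, run in-house.
# The data stays on hardware the company controls at every stage.
from transformers import pipeline

local_llm = pipeline("text-generation",
                     model="mistralai/Mistral-7B-Instruct-v0.2")  # illustrative model
out = local_llm("Summarize this internal memo: ...", max_new_tokens=200)
print(out[0]["generated_text"])

The trade-off described above is visible even in this toy example: the first path is a few lines against a hosted API, while the second requires provisioning, serving, and maintaining the model yourself.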
The reaction of businesses to the introduction of generative AI (GenAI) in the year since the launch of Microsoft-backed ChatGPT is one of increasing suspicion and disappointment. Over one in four organizations have banned the use of GenAI outright. The majority of companies are now also refusing to trust a technology that has already gained a reputation for making errors and even entirely fabricating information, a failing that is referred to as “hallucinating”. According to Cisco’s newly-released 2024 Data Privacy Benchmark Study, 68 percent of organizations mistrust GenAI because it gets results wrong and 69 percent also believe it could hurt their company’s legal rights. The study draws on responses from 2,600 privacy and security professionals across 12 geographies.
The verdict on artificial intelligence (AI) from the real experts is finally in; professional cybercriminal fraternities have judged AI to be “overrated, overhyped and redundant,” according to fresh research from cybersecurity firm Sophos. It has, hitherto, been accepted wisdom in the cybersecurity industry that cybercriminals, free from any regulatory authority or moral scruples, were among the first to harness the awesome power of AI to create bespoke and virtually unstoppable malware. However, having infiltrated the Dark Web forums where top professional cybercriminals discuss their trade, Sophos reports that the cybercrime sector has thoroughly tested the capabilities of AI and found it wanting.
SlashNext's "State of Phishing Report for 2023" recorded a 1,265 percent increase in malicious phishing emails since Q4 2022, a surge the firm correlates with the launch of ChatGPT. The report also found an average of 31,000 phishing attacks sent per day over the past year, with 68 percent of all phishing emails being text-based business email compromise (BEC) attempts.