Cyber Intelligence

Security minefield ahead for GenAI users

By Editorial Team
January 6, 2025 at 11:48 AM

Gadi Bashvitz, CEO of Bright Security

In an exclusive interview with Cyber Intelligence, Gadi Bashvitz, CEO of cybersecurity testing firm Bright Security, warns of the security challenges facing organizations in the wake of the widespread adoption of GenAI.

Cyber Intelligence: Are there any specific dangers of which companies using GenAI to generate new code should be particularly aware?

Gadi Bashvitz: There are multiple considerations here. On one hand, any solution developed leveraging LLMs is prone to LLM-specific vulnerabilities such as Insecure Output Handling and Broken Access Control, and it is critical that organizations are aware of and can detect such vulnerabilities before releasing LLM-based solutions. On the other hand, organizations leveraging GenAI code-generation tools face a very different risk. The underlying problem is that AI code-generation technologies are trained on a great deal of open-source code. This open-source content is full of security holes that are very easy for bad actors to exploit, and the AI-generated code inherits these same issues. AI-generated code has four times more vulnerabilities than human-generated code. Organizations need to identify the cybersecurity gaps left by GenAI by using vulnerability-testing and remediation solutions.
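Insecure Output Handling, one of the LLM-specific vulnerabilities Bashvitz mentions, arises when an application trusts model output and passes it downstream unchecked, for example straight into a web page. A minimal sketch of the problem and the fix (the function names here are illustrative, not Bright Security's tooling):

```python
import html

def render_unsafe(llm_output: str) -> str:
    # Vulnerable: model output is interpolated directly into HTML,
    # so a prompt-injected <script> tag would execute in the browser.
    return f"<div class='answer'>{llm_output}</div>"

def render_safe(llm_output: str) -> str:
    # Treat model output like any untrusted user input: escape it
    # before rendering so markup is displayed, not executed.
    return f"<div class='answer'>{html.escape(llm_output)}</div>"

malicious = "<script>steal(document.cookie)</script>"
print("<script>" in render_unsafe(malicious))  # True: payload survives
print("<script>" in render_safe(malicious))    # False: payload neutralized
```

The same principle applies to any sink the model output reaches: shell commands, SQL queries, or file paths all need the equivalent of this escaping step.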

Cyber Intelligence: How urgent a priority is this?

Gadi Bashvitz: Deploying GenAI-based software without being aware of, testing, and remediating the vulnerabilities it could introduce is like installing a very sophisticated alarm system and never arming it: you might be giving away the keys to the kingdom. If, for example, an organization is going to launch a new product, it is essential to address potential and already-existing vulnerabilities at the pre-production stage. If, on the other hand, the product is in the post-production or launch phase, vulnerability assessment of AI-generated code is even more urgent. Companies also need to examine precisely what role GenAI may have in the development of their application programming interfaces (APIs), the rules and protocols that allow different software applications to communicate with each other. Failing to do so could result in significant exposure to data breaches, a raft of lawsuits, and some very hefty future fines.

Cyber Intelligence: So what can organizations that may have adopted AI quickly do to comply with this fresh barrage of security directives relating to the adoption of GenAI?

Gadi Bashvitz: There are a number of steps organizations can take to improve their security posture. As organizations adopt GenAI tools or use GenAI to speed up coding, they need to build security into these processes and make sure they can identify vulnerabilities before they are deployed to production and can be exploited by the rapidly growing number of bad actors worldwide. These may be financially motivated international gangs of cybercriminals or, increasingly, nation-state-backed bad actors intent on espionage and sabotage. Once the vulnerabilities are identified, they need to be effectively remediated and the fix validated.
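The identify-before-deploy discipline described above is typically enforced as a gate in the build pipeline: scan, then block the release while any serious finding remains unfixed. A minimal sketch of such a gate; the `Finding` structure and `gate_deploy` function are hypothetical illustrations, not any specific scanner's API:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    rule: str        # e.g. "insecure-output-handling"
    severity: str    # "low" | "medium" | "high"
    remediated: bool # has a validated fix been applied?

def gate_deploy(findings: list) -> bool:
    """Allow deployment only when no high-severity finding is unremediated."""
    blocking = [f for f in findings
                if f.severity == "high" and not f.remediated]
    return not blocking

# One high-severity finding is fixed, the other is not, so the gate blocks.
scan = [
    Finding("insecure-output-handling", "high", remediated=True),
    Finding("broken-access-control", "high", remediated=False),
]
print(gate_deploy(scan))  # False
```

In a real pipeline the remediation-and-validation loop Bashvitz describes maps onto rerunning the scan after each fix until the gate returns true.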

Cyber Intelligence: Exactly how and when is international regulation relating to GenAI going to impact organizations in, for instance, the US?

Gadi Bashvitz: As of January 16, organizations across the world that carry any data on European Union (EU) citizens will be obliged to comply with the EU’s new Digital Operational Resilience Act (DORA). DORA aims to strengthen the IT security of entities like banks, insurance companies, and investment firms, and companies that do not plug security gaps created by GenAI could face severe penalties. For example, DORA imposes strict secure-by-design principles that GenAI is not yet capable of adhering to. It will also encompass intellectual property rights on any product or service, something GenAI is notorious for disregarding.

Cyber Intelligence: In addition to DORA in January, is there any other international regulation about to impact organizations that may have recently begun to adopt GenAI?

Gadi Bashvitz: The EU ban on AI systems that pose an unacceptable risk also comes into force in February. The act clearly sets out a list of prohibited AI practices that pose an “unacceptable risk” to EU citizens’ safety or that are intrusive or discriminatory. This could create potential pitfalls for organizations that have already integrated GenAI. Very recently, on December 16, 2024, the US National Cybersecurity Center of Excellence (NCCoE) also released a draft NIST Internal Report (IR), which calls for a structured, tightly risk-based approach to managing cybersecurity.

Cyber Intelligence: Thank you.

TAGGED: ai coding, ai cyber, ai generated code, ai regulation, ai report, ai risks, ai threats, ai vulnerability, artificial intelligence, bright security, cyber ai, cybercrime, Cybersecurity, data privacy, european union, gadi bashvitz, generative ai, international regulation, it security, large language model, LLM, llms, nist, us nccoe
