15.4 C
United States of America
Thursday, November 14, 2024

48% of Safety Professionals Consider AI Is Dangerous


A latest HackerOne survey make clear the rising considerations AI brings to the cybersecurity panorama. The report drew insights from 500 safety consultants, a neighborhood survey of two,000 members, suggestions from 50 clients, and anonymized platform knowledge.

Their most vital considerations associated to AI had been:

  • Leaked coaching knowledge (35%).
  • Unauthorized utilization (33%).
  • The hacking of AI fashions by outsiders (32%)

The survey additionally discovered that 48% consider AI poses essentially the most important safety threat to their group. These fears spotlight the pressing want for firms to reassess their AI safety methods earlier than vulnerabilities grow to be actual threats.

How the safety analysis neighborhood modified within the age of AI

The HackerOne report indicated that AI can pose a risk — and the safety neighborhood has been aiming to counter that risk. Amongst these surveyed, 10% of safety researchers focus on AI. In actual fact, 45% of safety leaders contemplate AI amongst their organizations’ best dangers. Knowledge integrity, specifically, was a priority.

“AI is even hacking different AI fashions,” mentioned Jasmin Landry, a safety researcher, and HackerOne pentester, also referred to as @jr0ch17, within the report.

Of these surveyed, 51% say primary safety practices are being missed as firms hurry to incorporate generative AI. Solely 38% of HackerOne clients felt assured in defending towards AI threats.

Mostly reported AI vulnerabilities embody logic errors and LLM immediate injection

As a safety platform, HackerOne has seen the variety of AI belongings included in its applications develop by 171% over the previous yr.

Essentially the most generally reported vulnerabilities in AI belongings are:

  • Basic AI security (similar to stopping AI from producing dangerous content material) (55%).
  • Enterprise logic errors (30%).
  • LLM immediate injection (11%).
  • LLM coaching knowledge poisoning (3%).
  • LLM delicate info disclosure (3%).

HackerOne emphasised the significance of the human ingredient in defending techniques from AI and preserving these instruments protected.

“Even essentially the most subtle automation can’t match the ingenuity of human intelligence,” mentioned Chris Evans, HackerOne CISO and chief hacking officer, in a press launch. “The 2024 Hacker-Powered Safety Report proves how important human experience is in addressing the distinctive challenges posed by AI and different rising applied sciences.”

SEE: For the third quarter in a row, executives are extra involved about AI-assisted assaults than every other risk, Gartner reported.

Outdoors AI, cross-site scripting issues happen essentially the most

Some issues haven’t modified: Cross-site scripting (XSS) and misconfigurations are the weaknesses most reported by the HackerOne neighborhood. The respondents contemplate penetration exams and bug bounties the very best methods to determine points.

AI tends to generate false positives for safety groups

Additional analysis from a HackerOne-sponsored SANS Institute report in September revealed that 58% of safety professionals consider that safety groups and risk actors might discover themselves in an “arms race” to leverage generative AI techniques and methods of their work.

Safety professionals within the SANS survey mentioned they’ve efficiently used AI to automate tedious duties (71%). Nonetheless, the identical individuals acknowledged that risk actors might exploit AI to make their operations extra environment friendly. Particularly, respondents “had been most involved with AI-powered phishing campaigns (79%) and automatic vulnerability exploitation (74%).”

SEE: Safety leaders are getting pissed off with AI-generated code.

“Safety groups should discover the very best purposes for AI to maintain up with adversaries whereas additionally contemplating its current limitations — or threat creating extra work for themselves,” Matt Bromiley, an analyst on the SANS Institute, mentioned in a press launch.

The answer? AI implementations ought to bear an exterior assessment. Over two-thirds of these surveyed (68%) selected “exterior assessment” as the simplest approach to determine AI security and safety points.

“Groups at the moment are extra real looking about AI’s present limitations” than they had been final yr, mentioned HackerOne Senior Options Architect Dane Sherrets in an electronic mail to TechRepublic. “People convey numerous essential context to each defensive and offensive safety that AI can’t replicate fairly but. Issues like hallucinations have additionally made groups hesitant to deploy the expertise in important techniques. Nonetheless, AI continues to be nice for growing productiveness and performing duties that don’t require deep context.”

Additional findings from the SANS 2024 AI Survey, launched this month, embody:

  • 38% plan to undertake AI inside their safety technique sooner or later.
  • 38.6% of respondents mentioned they’ve confronted shortcomings when utilizing AI to detect or reply to cyber threats.
  • 40% cite authorized and moral implications as a problem to AI adoption.
  • 41.8% of firms have confronted pushback from staff who don’t belief AI selections, which SANS speculates is “attributable to lack of transparency.”
  • 43% of organizations at present use AI inside their safety technique.
  • AI expertise inside safety operations is most frequently utilized in anomaly detection techniques (56.9%), malware detection (50.5%), and automatic incident response (48.9%).
  • 58% of respondents mentioned AI techniques wrestle to detect new threats or reply to outlier indicators, which SANS attributes to a scarcity of coaching knowledge.
  • Of those that reported shortcomings with utilizing AI to detect or reply to cyber threats, 71% mentioned AI generated false positives.

HackerOne’s suggestions for enhancing AI safety

HackerOne recommends:

  • Usually testing, validation, verification, and analysis all through an AI mannequin’s life cycle — from coaching to deployment and use.
  • Researching whether or not authorities or industry-specific AI compliance necessities are related to your group and establishing an AI governance framework.

HackerOne additionally strongly really helpful that organizations talk about generative AI brazenly and supply coaching on related safety and moral points.

HackerOne launched some survey knowledge in September and the total report in November. This up to date article considers each.

Related Articles

LEAVE A REPLY

Please enter your comment!
Please enter your name here

Latest Articles