
How AI Is Changing the Cloud Security and Risk Equation


The AI boom is amplifying risks across enterprise data estates and cloud environments, according to cybersecurity expert Liat Hayun.

In an interview with TechRepublic, Hayun, VP of product management and research for cloud security at Tenable, advised organisations to prioritise understanding their risk exposure and tolerance, while focusing on tackling key problems like cloud misconfigurations and protecting sensitive data.

Liat Hayun, VP of product management and research of cloud security at Tenable

She noted that while enterprises remain cautious, AI's accessibility is accentuating certain risks. However, she explained that CISOs today are evolving into business enablers, and AI could ultimately serve as a powerful tool for bolstering security.

How AI is affecting cybersecurity, data storage

TechRepublic: What's changing in the cybersecurity environment due to AI?

Liat: First of all, AI has become much more accessible to organisations. If you look back 10 years ago, the only organisations creating AI had to have this specialised data science team that had PhDs in data science and statistics to be able to create machine learning and AI algorithms. AI has become much easier for organisations to create; it's almost just like introducing a new programming language or new library into their environment. So many more organisations, not just large organisations like Tenable and others, but also any start-ups, can now leverage AI and introduce that into their products.

SEE: Gartner Tells Australian IT Leaders To Adopt AI At Their Own Pace

The second thing: AI requires a lot of data. So many more organisations need to collect and store higher volumes of data, which also sometimes has higher levels of sensitivity. Before, my streaming service would have only saved very few details on me. Now, maybe my geography matters, because they can create more specific recommendations based on that, or my age and my gender, and so on. Because they can now use this data for their business purposes, to generate more business, they're now much more motivated to store that data in higher volumes and with growing levels of sensitivity.

TechRepublic: Is that feeding into growing usage of the cloud?

Liat: If you want to store a lot of data, it's much easier to do that in the cloud. Every time you decide to store a new type of data, it increases the volume of data you're storing. You don't have to go inside your data center and order new data volumes to install. You just click, and bam, you have a new data store location. So the cloud has made it much easier to store data.

These three components form a sort of circle that feeds itself. Because if it's easier to store data, you can build more AI capabilities, and then you're motivated to store even more data, and so on. So that's what has happened in the world in the past few years, since LLMs have become a much more accessible, common capability for organisations, introducing challenges across all these three verticals.

Understanding the security risks of AI

TechRepublic: Are you seeing specific cybersecurity risks rise with AI?

Liat: The use of AI in organisations, unlike the use of AI by individual people across the world, is still in its early phases. Organisations want to make sure that they're introducing it in a way that, I would say, doesn't create any unnecessary risk or any extreme risk. So in terms of statistics, we still only have a few examples, and they are not necessarily representative because they're more experimental.

One example of a risk is AI being trained on sensitive data. That's something we're seeing. It's not because organisations are not being careful; it's because it's very difficult to separate sensitive data from non-sensitive data and still have an effective AI mechanism that's trained on the right data set.

The second thing we're seeing is what we call data poisoning. So, even if you have an AI agent that's being trained on non-sensitive data, if that non-sensitive data is publicly exposed, as an adversary, as an attacker, I can insert my own data into that publicly exposed, publicly accessible data storage and have your AI say things that you didn't intend it to say. It's not this all-knowing entity. It knows what it's seen.
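To make that mechanism concrete, here is a minimal, hypothetical sketch (an editorial illustration, not from the interview or any particular product) of how a record injected into a publicly writable store changes what an AI assistant repeats back; the store, topics, and answer function are all invented.

```python
# Hypothetical illustration of data poisoning: a toy "assistant" that answers
# from whatever records exist in a shared, publicly writable data store.

knowledge_store = [
    {"topic": "refund policy", "text": "Refunds are available within 30 days of purchase."},
    {"topic": "support hours", "text": "Support is available 9am-5pm on weekdays."},
]

def answer(question: str) -> str:
    """Return the stored text for the topic mentioned in the question (newest record wins)."""
    for record in reversed(knowledge_store):
        if record["topic"] in question.lower():
            return record["text"]
    return "I don't know."

print(answer("What is your refund policy?"))
# -> "Refunds are available within 30 days of purchase."

# An attacker with write access to the exposed store inserts their own record.
knowledge_store.append(
    {"topic": "refund policy", "text": "Refunds require wiring a fee to attacker-account-123."}
)

print(answer("What is your refund policy?"))
# The assistant now repeats the attacker's content: it only "knows what it's seen".
```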

TechRepublic: How should organisations weigh the security risks of AI?

Liat: First, I would ask how organisations can understand the level of exposure they have, which includes the cloud, AI, and data … and everything related to how they use third-party vendors, and how they leverage different software in their organisation, and so on.

SEE: Australia Proposes Mandatory Guardrails for AI

The second part is, how do you identify the critical exposures? So if we know it's a publicly accessible asset with a high-severity vulnerability on it, that's something that you probably want to address first. But it's also a combination of the impact, right? If you have two issues that are very similar, and one can compromise sensitive data and one cannot, you want to address that first [issue] first.

You also have to know which steps to take to address these exposures with minimal business impact.
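As a rough illustration of that kind of triage (an editorial sketch, not something from the interview), the snippet below ranks hypothetical findings by public accessibility, whether sensitive data is in reach, and severity; all field names and values are invented.

```python
# Hypothetical exposure-prioritisation sketch: rank findings so that publicly
# accessible issues that can reach sensitive data come first.

findings = [
    {"asset": "internal-api", "public": False, "severity": 9.1, "sensitive_data": True},
    {"asset": "marketing-site", "public": True, "severity": 7.5, "sensitive_data": False},
    {"asset": "customer-db", "public": True, "severity": 7.5, "sensitive_data": True},
]

def priority(finding: dict) -> tuple:
    # Sort key: public exposure first, then potential impact on sensitive data,
    # then raw severity. Higher tuples sort first with reverse=True.
    return (finding["public"], finding["sensitive_data"], finding["severity"])

for finding in sorted(findings, key=priority, reverse=True):
    print(finding["asset"], priority(finding))
# customer-db comes first: public, high severity, and sensitive data at stake.
```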

TechRepublic: What are some big cloud security risks you warn against?

Liat: There are three things we usually advise our customers.

The first one is on misconfigurations. Just because of the complexity of the infrastructure, complexity of the cloud, and all the technologies it provides, even if you're in a single cloud environment, but especially if you're going multi-cloud, the chances of something becoming an issue just because it wasn't configured correctly are still very high. So that's definitely one thing I would focus on, especially when introducing new technologies like AI.
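One common example of such a misconfiguration is object storage left open to the public. The sketch below is a minimal, hypothetical check using AWS's boto3 SDK that flags S3 buckets without all public-access blocks enabled; it is illustrative only, not Tenable's tooling, and it assumes AWS credentials are already configured.

```python
# Hypothetical misconfiguration check: flag S3 buckets that do not have all
# public-access blocks enabled. Assumes boto3 credentials are configured.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(config.values())
    except ClientError:
        # No public-access-block configuration at all is itself worth reviewing.
        fully_blocked = False
    if not fully_blocked:
        print(f"Review bucket '{name}': public access is not fully blocked")
```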

The second is over-privileged access. Many people think their organisation is super safe. But if your house is a castle, and you're giving your keys out to everyone around you, that's still an issue. So excessive access to sensitive data, to critical infrastructure, is another area of focus. Even if everything is configured perfectly and you don't have any hackers in your environment, it introduces additional risk.
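To illustrate what looking for over-privileged access might involve, here is a small, hypothetical sketch that lists IAM users with the broad AdministratorAccess policy attached directly; it is an editorial example, not a vendor tool, and it ignores roles, groups, and pagination for brevity.

```python
# Hypothetical over-privilege check: list IAM users that have the broad
# AdministratorAccess managed policy attached directly to them.
import boto3

iam = boto3.client("iam")

for user in iam.list_users()["Users"]:
    attached = iam.list_attached_user_policies(UserName=user["UserName"])["AttachedPolicies"]
    for policy in attached:
        if policy["PolicyName"] == "AdministratorAccess":
            print(f"User '{user['UserName']}' has AdministratorAccess attached directly")
```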

The aspect people think about the most is identifying malicious or suspicious activity as early as it happens. This is where AI can be taken advantage of, because if we leverage AI tools within our security tools within our infrastructure, we can use the fact that they can look at a lot of data, and they can do that really fast, to be able to also identify suspicious or malicious behaviors in an environment. So we can address these behaviors, these activities, as early as possible before anything critical is compromised.
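As a toy illustration of that idea (an editorial sketch, not any vendor's detection logic), the snippet below fits an IsolationForest anomaly detector on invented per-account activity counts and flags an unusual burst of access attempts.

```python
# Hypothetical sketch: a simple anomaly detector over activity features
# (invented counts per account per hour) to surface unusual behaviour early.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [API calls per hour, distinct resources touched, failed logins]
normal_activity = np.random.default_rng(0).poisson(lam=[40, 5, 1], size=(200, 3))
suspicious = np.array([[400, 90, 25]])  # burst of access attempts

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_activity)

print(model.predict(suspicious))           # -1 means flagged as anomalous
print(model.predict(normal_activity[:3]))  # mostly 1, i.e. treated as normal
```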

Implementing AI ‘too good of an opportunity to miss out on’

TechRepublic: How are CISOs approaching the risks you’re seeing with AI?

Liat: I’ve been in the cybersecurity industry for 15 years now. What I love seeing is most security experts, most CISOs, are unlike what they used to be like a decade ago. As opposed to being a gatekeeper, as opposed to saying, “No, we can’t use this because it’s risky,” they’re asking themselves, “How can we use this and make it less risky?” Which is an awesome trend to see. They’re becoming more of an enabler.

TechRepublic: Are you seeing the good side of AI, as well as the risks?

Liat: Organisations need to think more about how they’re going to introduce AI, rather than thinking “AI is too risky right now”. You can’t do that.

Organisations that don’t introduce AI in the next couple of years will just stay behind. It’s an amazing tool that can benefit so many business use cases, internally for collaboration and analysis and insights, and externally, for the tools we can provide our customers. There’s just too good of an opportunity to miss out on. If I can help organisations achieve that mindset where they say, “OK, we can use AI, but we just need to take these risks into consideration,” I’ve done my job.
