Organizations are either already adopting GenAI solutions, evaluating strategies for integrating these tools into their business plans, or both. To drive informed decision-making and effective planning, the availability of hard data is essential, yet such data remains surprisingly scarce.
The "Enterprise GenAI Data Security Report 2025" by LayerX delivers unprecedented insights into the practical application of AI tools in the workplace, while highlighting critical vulnerabilities. Drawing on real-world telemetry from LayerX's enterprise clients, this report is one of the few reliable sources that details actual employee use of GenAI.
For example, it reveals that nearly 90% of enterprise AI usage occurs outside the visibility of IT, exposing organizations to significant risks such as data leakage and unauthorized access.
Below, we present some of the report's key findings. Read the full report to refine and enhance your security strategies, leverage data-driven decision-making for risk management, and advocate for resources to strengthen GenAI data security measures.
To register for a webinar that will cover the key findings of this report, click here.
Use of GenAI in the Enterprise Is Casual at Most (for Now)
While the GenAI hype may make it seem like the entire workforce has shifted its office operations to GenAI, LayerX finds actual use a tad more lukewarm. Approximately 15% of users access GenAI tools on a daily basis. This is not a percentage to be dismissed, but it is not the majority.
Yet. Here at The New Stack, we concur with LayerX's analysis, predicting this trend will accelerate quickly, especially since 50% of users currently use GenAI every other week.
In addition, they find that 39% of regular GenAI tool users are software developers. This means that the highest potential for data leakage through GenAI involves source and proprietary code, along with the risk of incorporating risky code into your codebase.
How Is GenAI Being Used? Who Knows?
Since LayerX sits in the browser, the tool has visibility into the use of shadow SaaS. This means it can see employees using tools that were not approved by the organization's IT, or that are accessed through non-corporate accounts.
And while GenAI tools like ChatGPT are used for work purposes, nearly 72% of employees access them through their personal accounts. When employees do access them through corporate accounts, only about 12% of that access is done with SSO. As a result, nearly 90% of GenAI usage is invisible to the organization. This leaves organizations blind to "shadow AI" applications and the unsanctioned sharing of corporate information on AI tools.
50% of Pasting Activity into GenAI Includes Corporate Data
Remember the Pareto principle? In this case, while not all users use GenAI daily, those who do paste into GenAI applications do so frequently, and often with potentially confidential information.
LayerX found that pasting of corporate data occurs almost four times a day, on average, among users who submit data to GenAI tools. This could include business information, customer data, financial plans, source code, and more.
How to Plan for GenAI Usage: What Enterprises Must Do Now
The findings in the report signal an urgent need for new security strategies to manage GenAI risk. Traditional security tools fail to address the modern AI-driven workplace, where applications are browser-based. They lack the ability to detect, control, and secure AI interactions at the source: the browser.
Browser-based security provides visibility into access to AI SaaS applications, unknown AI applications beyond ChatGPT, AI-enabled browser extensions, and more. This visibility can be used to apply DLP solutions for GenAI, allowing enterprises to safely include GenAI in their plans and future-proof their business.
To access more data on how GenAI is being used, read the full report.