We’re excited to announce the second version of the Databricks AI Security Framework (DASF 2.0, available for download now)! Organizations racing to harness AI’s potential need both the ‘gas’ of innovation and the ‘brakes’ of governance and risk management. The DASF bridges this gap, enabling secure and impactful AI deployments in your organization by serving as a comprehensive guide to AI risk management.
This blog provides an overview of the DASF, explores key insights gained since the original version was released, introduces new resources to deepen your understanding of AI security, and shares updates on our industry contributors.
What is the Databricks AI Security Framework, and what’s new in version 2.0?
The DASF is a framework and whitepaper for managing AI security and governance risks. It enumerates the 12 canonical AI system components, their respective risks, and actionable controls to mitigate each risk. Created by the Databricks Security and ML teams in partnership with industry experts, it bridges the gap between business, data, governance, and security teams with practical tools and actionable strategies to demystify AI, foster collaboration, and ensure effective implementation.
Unlike other frameworks, the DASF 2.0 builds on existing standards to provide an end-to-end risk profile for AI deployments. It delivers defense-in-depth controls that make AI risk management easier for your organization to operationalize, and it can be applied to your chosen data and AI platform.
In the DASF 2.0, we’ve identified 62 technical security risks and mapped them to 64 recommended controls for managing the risk of AI models. We’ve also expanded mappings to leading industry AI risk frameworks and standards, including MITRE ATLAS, OWASP LLM & ML Top 10, NIST 800-53, NIST CSF, HITRUST, ENISA’s Securing ML Algorithms, ISO 42001, ISO 27001:2022, and the EU AI Act.
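To make the shape of these risk-to-control mappings concrete, here is a minimal sketch of how a single DASF risk entry might be represented in code. The class fields, risk and control identifiers are illustrative placeholders, not entries taken from the actual DASF tables.

```python
from dataclasses import dataclass, field

@dataclass
class DasfRisk:
    """Sketch of one technical security risk, with its mapped DASF controls
    and cross-references to industry standards (field names are illustrative)."""
    risk_id: str
    component: str                # one of the 12 AI system components
    description: str
    controls: list = field(default_factory=list)   # mapped DASF control IDs
    standards: dict = field(default_factory=dict)  # framework -> reference ID

# Hypothetical example entry (IDs invented for illustration).
example = DasfRisk(
    risk_id="DASF-R99",
    component="Model Serving",
    description="Prompt injection against a served LLM endpoint",
    controls=["DASF-50", "DASF-51"],
    standards={"MITRE ATLAS": "AML.T0051", "OWASP LLM Top 10": "LLM01"},
)

print(f"{example.risk_id}: {len(example.controls)} mapped controls")
```

Structuring each risk this way is what lets the compendium pivot the same data by component, by control, or by external standard.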
Operationalizing the DASF – check out the new compendium and the companion instructional video!
We’ve received valuable feedback as we share the DASF at industry events, workshops, and customer meetings. Many of you have asked for additional resources to make the DASF easier to navigate and operationalize, and to map your controls effectively.
In response, we’re excited to announce the release of the DASF compendium document (Google Sheet, Excel). This resource is designed to help operationalize the DASF by organizing and applying its risks, threats, controls, and mappings to industry-recognized standards from organizations such as MITRE, OWASP, NIST, ISO, HITRUST, and more. We’ve also created a companion instructional video that provides a guided walkthrough of the DASF and its compendium.
Our goal with these updates is to make the DASF easier to adopt, empowering organizations to implement AI systems securely and confidently. If you’re eager to dive in, our team recommends the following approach:
- Understand your stakeholders, deployment models, and AI use cases: Start with a business use case, leveraging the DASF whitepaper to identify the best-fit AI deployment model. Choose from 80+ Databricks Solution Accelerators to guide your journey. Deployment models include Predictive ML Models, Foundation Model APIs, Fine-tuned and Pre-trained LLMs, RAG, AI Agents with LLMs, and External Models. Ensure clarity on AI development within your organization, including use cases, datasets, compliance needs, processes, applications, and responsible stakeholders.
- Review the 12 AI system components and 62 risks: Understand the 12 AI system components, the traditional cybersecurity and novel AI security risks associated with each component, and the responsible stakeholders (e.g., data engineers, scientists, governance officers, and security teams). Use the DASF to foster collaboration across these groups throughout the AI lifecycle.
- Review the 64 available mitigation controls: Each risk is mapped to prioritized mitigation controls, beginning with perimeter and data security. These risks and controls are further aligned with 10 industry standards, providing additional detail and clarity.
- Use the DASF compendium to localize risks, control applicability, and risk impacts: Start by using the “DASF Risk Applicability” tab to identify risks relevant to your use case by selecting one or more AI deployment models. Next, review the associated risk impacts, compliance requirements, and mitigation controls. Finally, document key details for your use case, including the AI use case description, datasets, stakeholders, compliance considerations, and applications.
- Implement the prioritized controls: Use the “DASF Control Applicability” tab of the compendium to review the applicable DASF controls and implement the mitigation controls on your data platform across the 12 AI components. If you are using Databricks, we’ve included links with detailed instructions on how to deploy each control on our platform.
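The risk-applicability filtering step above can also be approximated programmatically against a spreadsheet export. The sketch below assumes a hypothetical CSV export of a risk-applicability tab; the column names, risk IDs, and control IDs are made up for illustration and do not reflect the real compendium’s schema.

```python
import csv
import io

# Hypothetical excerpt of a risk-applicability CSV export.
# Columns and values are illustrative, not the real compendium layout.
COMPENDIUM_CSV = """\
risk_id,risk,deployment_models,controls
DASF-R1,Training data poisoning,Predictive ML;RAG,DASF-1;DASF-4
DASF-R2,Prompt injection,RAG;AI Agents,DASF-22;DASF-30
DASF-R3,Model theft,Predictive ML;External Models,DASF-14
"""

def applicable_risks(csv_text, deployment_model):
    """Return the rows whose deployment_models list includes the given model."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        row for row in reader
        if deployment_model in row["deployment_models"].split(";")
    ]

if __name__ == "__main__":
    # List the risks (and their mapped controls) relevant to a RAG deployment.
    for row in applicable_risks(COMPENDIUM_CSV, "RAG"):
        print(row["risk_id"], row["risk"], "->", row["controls"])
```

Filtering by deployment model first, as the compendium does, keeps the review scoped to the risks your architecture can actually encounter.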
Implement the DASF in your organization with new AI upskilling resources from Databricks
According to a recent Economist Impact study, surveyed data and AI leaders identified upskilling and fostering a growth mindset as key priorities for driving AI adoption in 2025. As part of the DASF 2.0 release, we have resources to help you understand AI and ML concepts and apply AI security best practices in your organization.
- Databricks Academy training: We recommend taking the new AI Security Fundamentals course, now available on the Databricks Academy. This one-hour course is a great primer on the AI security topics highlighted in the DASF before you dive into the whitepaper. You’ll also receive an accreditation for your LinkedIn profile upon completion. If you are new to AI and ML concepts, start with our Generative AI Fundamentals course.
- How-to videos: We’ve recorded DASF overview and how-to videos for quick consumption. You can find these videos on our Security Best Practices YouTube channel.
- In-person or virtual workshop: Our team offers an AI Risk Workshop, a live walkthrough of the concepts outlined in the DASF that focuses on overcoming obstacles to operationalizing AI risk management. This half-day event targets Director+ leaders in governance, data, privacy, legal, IT, and security functions.
- Deployment help: The Security Analysis Tool (SAT) monitors adherence to security best practices in Databricks workspaces on an ongoing basis. We recently upgraded the SAT to streamline setup and enhance checks, aligning them with the DASF for improved coverage of AI security risks.
- DASF AI assistant: Databricks customers can configure a Databricks AI Security Framework (DASF) AI assistant right in their own workspace with no prior Databricks skills, interact with DASF content in plain language, and get answers.
Building a community with AI industry groups, customers, and partners
Ensuring that the DASF evolves in line with the current AI regulatory environment and emerging threat landscape is a top priority. Since the release of version 1.0, we’ve formed an AI working group of industry colleagues, customers, and partners to stay closely aligned with these developments. We want to thank our colleagues in the working group and our pre-reviewers, including Complyleft, The FAIR Institute, Ethriva Inc, Arhasi AI, Carnegie Mellon University, and Rakesh Patil from JPMC. You can find the complete list of contributors in the acknowledgments section of the DASF. If you would like to participate in the DASF AI Working Group, please contact our team at [email protected].
Here’s what some of our top advocates have to say:
“AI is revolutionizing healthcare delivery through innovations like the CLEVER GenAI pipeline, which processes over 1.5 million clinical notes daily to classify key social determinants impacting veteran care. This pipeline is built with a strong security foundation, incorporating NIST 800-53 controls and leveraging the Databricks AI Security Framework to ensure compliance and mitigate risks. Looking ahead, we’re exploring ways to extend these capabilities through Infrastructure as Code and secure containerization strategies, enabling agents to be dynamically deployed and scaled from repositories while maintaining rigorous security standards.” – Joseph Raetano, Artificial Intelligence Lead, Summit Data Analytics & AI Platform, U.S. Department of Veterans Affairs
“DASF is the essential instrument for transforming AI risk quantification into an operational reality. With the FAIR-AI Risk approach now in its second year, DASF 2.0 enables CISOs to bridge the gap between cybersecurity and business strategy, speaking a common language grounded in measurable financial impact.” – Jacqueline Lebo, Founder AI Workgroup, The FAIR Institute and Risk Advisory Manager, Safe Security
“As AI continues to transform industries, securing these systems from sophisticated and unique cybersecurity attacks is more critical than ever. The Databricks AI Security Framework is a great asset for companies to lead from the front on both innovation and security. With the DASF, companies are equipped to better understand AI risks and to find the tools and resources to mitigate those risks as they continue to innovate.” – Ian Swanson, CEO, Protect AI
“With the Databricks AI Security Framework, we’re able to mitigate AI risks thoughtfully and transparently, which is invaluable for building board and employee trust. It’s a game changer that allows us to bring AI into the business and be among the 15% of organizations getting AI workloads to production safely and with confidence.” — Coastal Community Bank
“Within the context of data and AI, conversations around security are few. The Databricks AI Security Framework addresses this often neglected side of AI and ML work, serving as a best-in-class guide not only for understanding AI security risks, but also for mitigating them.” – Josue A. Bogran, Architect at Kythera Labs & Advisor to SunnyData.ai
“We’ve used the Databricks AI Security Framework to help enhance our organization’s security posture for managing ML and AI security risks. With the Databricks AI Security Framework, we are now more confident in exploring possibilities with AI and data analytics while ensuring we have the right data governance and security measures in place.” – Muhammad Shami, Vice President, Jackson National Life Insurance Company
Download the Databricks AI Security Framework 2.0 today!
The Databricks AI Security Framework 2.0 and its compendium (Google Sheet, Excel) are now available for download. To learn about upcoming AI Risk workshops, or to request a dedicated in-person or virtual workshop for your organization, contact us at [email protected] or reach out to your account team. We also have additional thought leadership content coming soon to provide further insights into managing AI governance. For more insights on how to manage AI security risks, visit the Databricks Security and Trust Center.