
DeepSeek R1 on Databricks


DeepSeek-R1 is a state-of-the-art open model that, for the first time, brings the ‘reasoning’ capability to the open source community. Specifically, the release also includes the distillation of that capability into the Llama-70B and Llama-8B models, providing an attractive combination of speed, cost-effectiveness, and now ‘reasoning’ capability. We’re excited to share how you can easily download and run the distilled DeepSeek-R1-Llama models in Mosaic AI Model Serving, and benefit from its security, best-in-class performance optimizations, and integration with the Databricks Data Intelligence Platform. Now, with these open ‘reasoning’ models, you can build agent systems that reason even more intelligently over your data.

Playground demo

Deploying DeepSeek-R1-Distill-Llama Models on Databricks

To download, register, and deploy the DeepSeek-R1-Distill-Llama models on Databricks, use the notebook included here, or follow the simple instructions below:

 

1. Spin up the necessary compute¹ and load the model and its tokenizer:

This process should take several minutes as we download 32GB worth of model weights in the case of Llama 8B.
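As a rough sketch of what this step can look like (the Hugging Face repo ID and bfloat16 setting below are our assumptions; see the included notebook for the exact code):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face repo for the distilled 8B model; swap in the 70B variant as needed.
model_id = "deepseek-ai/DeepSeek-R1-Distill-Llama-8B"

# Downloads the weights and loads them onto the cluster's available device(s).
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # reduces memory relative to fp32
    device_map="auto",           # uses the GPU if present, otherwise CPU
)
```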

 

2. Then, register the model and the tokenizer as a transformers model. mlflow.transformers makes registering models in Unity Catalog simple – just configure your model size (in this case, 8B) and the model name.
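A minimal sketch of the registration call, assuming a Unity Catalog destination of the form catalog.schema.model_name (the main.default names below are placeholders):

```python
import mlflow

mlflow.set_registry_uri("databricks-uc")  # register into Unity Catalog

# Placeholder three-level Unity Catalog name; substitute your own catalog and schema.
registered_name = "main.default.deepseek_r1_distill_llama_8b"

with mlflow.start_run():
    mlflow.transformers.log_model(
        transformers_model={"model": model, "tokenizer": tokenizer},
        artifact_path="model",
        task="llm/v1/chat",  # chat-style signature so Model Serving can expose a chat API
        registered_model_name=registered_name,
    )
```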

¹ We used ML Runtime 15.4 LTS and a g4dn.4xlarge single-node cluster for the 8B model and a g6e.4xlarge for the 70B model. You don’t need GPUs per se to deploy the model from the notebook as long as the compute used has sufficient memory capacity.

 

3. To serve this model using our highly optimized Model Serving engine, simply navigate to Serving and launch an endpoint with your registered model!

Select served entity
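The step above uses the Serving UI; if you would rather script the endpoint, a sketch with the Databricks Python SDK could look like the following (the endpoint name, entity version, and workload settings are assumptions):

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.serving import EndpointCoreConfigInput, ServedEntityInput

w = WorkspaceClient()  # reads workspace credentials from the environment

w.serving_endpoints.create(
    name="deepseek-r1-distill-llama-8b",  # placeholder endpoint name
    config=EndpointCoreConfigInput(
        served_entities=[
            ServedEntityInput(
                entity_name="main.default.deepseek_r1_distill_llama_8b",  # model registered in step 2
                entity_version="1",
                workload_type="GPU_MEDIUM",  # assumed GPU tier; size it to the model you deploy
                workload_size="Small",
                scale_to_zero_enabled=True,
            )
        ]
    ),
)
```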

Once the endpoint is ready, you can easily query the model via our API, or use the Playground to start prototyping your applications.

Playground demo
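For example, a query against the endpoint might look like this sketch, which uses the OpenAI-compatible interface that Model Serving exposes (the endpoint name and the environment variables for host and token are placeholders):

```python
import os
from openai import OpenAI

# Placeholders: your workspace URL (e.g. https://<workspace>.cloud.databricks.com) and a personal access token.
client = OpenAI(
    api_key=os.environ["DATABRICKS_TOKEN"],
    base_url=f"{os.environ['DATABRICKS_HOST']}/serving-endpoints",
)

response = client.chat.completions.create(
    model="deepseek-r1-distill-llama-8b",  # the serving endpoint name
    messages=[{"role": "user", "content": "How many r's are in 'strawberry'?"}],
    max_tokens=1024,  # reasoning traces can be long, so leave headroom
)
print(response.choices[0].message.content)
```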

With Mosaic AI Model Serving, deploying this model is both simple and powerful, taking advantage of our best-in-class performance optimizations as well as integration with the Lakehouse for governance and security.

When to use reasoning models

One distinctive aspect of the DeepSeek-R1 series of models is their capacity for extended chain-of-thought (CoT), similar to the o1 models from OpenAI. You can see this in our Playground UI, where the collapsible “Thinking” section shows the CoT traces of the model’s reasoning. This can lead to higher-quality answers, particularly for math and coding, but at the cost of significantly more output tokens. We also recommend that users follow DeepSeek’s Usage Guidelines when interacting with the model.

These are early days in figuring out how to use reasoning models, and we’re excited to hear what new data intelligence systems our customers can build with this capability. We encourage our customers to experiment with their own use cases and let us know what you find. Look out for additional updates in the coming weeks as we dive deeper into R1, reasoning, and how to build data intelligence on Databricks.
