DeepSeek, a Chinese AI company, has recently released its DeepSeek-R1 AI model. After its release, DeepSeek-R1 became the most downloaded app on the App Store. Now, Microsoft has brought the Chinese DeepSeek-R1 model to its Azure AI Foundry platform and GitHub, including a version optimized for Copilot+ PCs.
DeepSeek is available for Copilot+ PCs on Azure and GitHub
DeepSeek-R1, which significantly impacted the US financial market this week, now joins a diverse portfolio of over 1,800 models in the model catalog on Azure AI Foundry and GitHub.
As part of Azure AI Foundry, DeepSeek-R1 is accessible on a trusted, scalable, and enterprise-ready platform, enabling businesses to seamlessly integrate advanced AI while meeting SLAs, security, and responsible AI commitments.
One of the key advantages of using the DeepSeek-R1 model on Azure AI Foundry is the speed at which developers can experiment and integrate AI into their workflows. Moreover, the built-in evaluation tools help developers quickly compare outputs, benchmark performance, and scale AI-powered applications.
Using DeepSeek in the model catalog on Azure AI Foundry
To use DeepSeek in the model catalog on Azure AI Foundry, log in to your Azure account and search for DeepSeek-R1 in the model catalog. Open the model card and click the Deploy button. An API key will be generated, which you can use with various clients.
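Once a deployment exists, the generated API key can be used from any HTTP client. The sketch below, using only the Python standard library, builds a single-turn chat-completions request; the endpoint URL, payload shape, and `max_tokens` value are assumptions for illustration — use the exact endpoint and sample shown on your deployment's page in Azure AI Foundry.

```python
# Hypothetical sketch: calling a DeepSeek-R1 deployment on Azure AI Foundry.
# The endpoint URL and request body shape are assumptions -- copy the real
# values from your deployment's page in Azure AI Foundry.
import json
import urllib.request


def build_chat_request(endpoint: str, api_key: str, prompt: str) -> urllib.request.Request:
    """Build an HTTP request for a single-turn chat completion."""
    body = json.dumps({
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,  # illustrative limit
    }).encode("utf-8")
    return urllib.request.Request(
        url=endpoint,  # e.g. the chat/completions URL from the model card
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # the key generated on Deploy
        },
        method="POST",
    )


# Sending the request requires a live deployment, e.g.:
# with urllib.request.urlopen(build_chat_request(ENDPOINT, KEY, "Hello")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Keeping the request-building step separate from the network call makes it easy to swap in a different client library later without changing the payload logic.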
Using DeepSeek on Copilot+ PCs
To use DeepSeek on your Copilot+ PC, you need to install the AI Toolkit VS Code extension. Once the extension is installed, you can download the DeepSeek model locally on your PC by clicking the Download button.
For Copilot+ PCs, the DeepSeek model will be optimized in the ONNX QDQ format and will soon be available in the AI Toolkit's model catalog, from which it can be pulled directly from Azure AI Foundry. In addition to the ONNX model, Copilot+ PC users can also try the cloud-hosted source model in Azure AI Foundry.
Microsoft also says that the DeepSeek distilled Qwen 1.5B model cannot be mapped directly to the NPU due to the presence of dynamic input shapes and behavior. Hence, optimizations are needed to make it compatible. Additionally, Microsoft is using the ONNX QDQ optimized format of DeepSeek to enable scaling across a variety of NPUs on Copilot+ PCs.
To get more information on running the distilled DeepSeek-R1 models locally on Copilot+ PCs, you can visit the official Microsoft website.