You’ve probably heard the buzz about DeepSeek R1. It’s an open-source AI model being compared to top-tier proprietary models like OpenAI’s o1. More importantly, it’s a reasoning model: it uses a chain-of-thought process to work through a problem (and check its own answer) step by step before settling on a response. This approach helps the model produce more accurate answers to complex questions that demand serious reasoning. And because it’s open source, for the first time you can install a reasoning model of this caliber on your own PC and run it fully offline, so you don’t have to worry about privacy.

In this guide, I’ll show you how to set up DeepSeek R1 locally, even if this is your first time and you’re new to running AI models. The steps are the same for Mac, Windows, or Linux.
Models You Can Install and Prerequisites
DeepSeek R1 is available in different sizes. While running the full 671B-parameter model isn’t feasible for most machines, smaller distilled versions can be installed locally on your PC. Note that running AI models locally is resource-intensive, requiring storage space, RAM, and GPU power. Each model has specific hardware requirements; here’s a quick overview:
| Model | Parameters | Disk Space | RAM | Recommended GPU |
|---|---|---|---|---|
| DeepSeek-R1-Distill-Qwen-1.5B | 1.5B | 1.1 GB | ~3.5 GB | NVIDIA RTX 3060 12GB or higher |
| DeepSeek-R1-Distill-Qwen-7B | 7B | 4.7 GB | ~16 GB | NVIDIA RTX 4080 16GB or higher |
| DeepSeek-R1-Distill-Llama-8B | 8B | 4.9 GB | ~18 GB | NVIDIA RTX 4080 16GB or higher |
| DeepSeek-R1-Distill-Qwen-14B | 14B | 9 GB | ~32 GB | Multi-GPU setup (e.g., NVIDIA RTX 4090 x2) |
| DeepSeek-R1-Distill-Qwen-32B | 32B | 20 GB | ~74 GB | Multi-GPU setup (e.g., NVIDIA RTX 4090 x4) |
| DeepSeek-R1-Distill-Llama-70B | 70B | 43 GB | ~161 GB | Multi-GPU setup (e.g., NVIDIA A100 80GB x2) |
| DeepSeek-R1 | 671B | 404 GB | ~1,342 GB | Multi-GPU setup (e.g., NVIDIA A100 80GB x16) |
More RAM is always better: if your machine has headroom beyond the minimums above, the models will run more smoothly, and you can comfortably step up to larger ones.
Pro Tip: Just starting out and confused about which R1 model to install? We recommend trying the smallest 1.5B-parameter model (the first one in the table above): it’s lightweight and easy to test.
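Not sure what hardware your machine has? On Windows, the Performance tab of Task Manager shows your RAM and GPU. On Linux, the two commands below show your available RAM and your NVIDIA GPU’s VRAM (the second assumes an NVIDIA card with drivers installed):
free -h
nvidia-smi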
How to Install DeepSeek R1 Locally
There are different ways to install and run the DeepSeek models locally on your computer. We will share a few easy ones here.
Pro Tip: If you’re just starting out and want an easy way to install the DeepSeek R1 model (or any AI model, for that matter), we recommend the Ollama and Chatbox method.
Method 1: Installing R1 Using Ollama and Chatbox
This is the easiest way to get started, even for beginners.
Step 1: Install Ollama
1. Go to the Ollama website and download the installer for your operating system (Mac, Windows, or Linux). Run the installer and follow the on-screen instructions.
2. Once installed, open the Terminal and confirm it’s working by running the command below.
ollama --version

You should see a version number appear, which means Ollama is ready to use.
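The output looks something like the line below (the exact version number will differ on your machine):
ollama version is 0.5.7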
Step 2: Download and Run the DeepSeek R1 Model
1. Run the following command in the Terminal, replacing [model size] with the model you want to install. For example, for the 1.5B-parameter model, run: ollama run deepseek-r1:1.5b.
ollama run deepseek-r1:[model size]
2. Wait for the model to download. You’ll see progress in the terminal.

3. Once downloaded, the model will start running, and you can chat with it directly from the Terminal. From now on, run the same command whenever you want to chat with the DeepSeek R1 model.
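Under the hood, Ollama also serves a local HTTP API on port 11434; it’s the same endpoint Chatbox will connect to in the next step. As a quick sanity check, you can send a prompt to it with curl (this sketch assumes you installed the 1.5B model):
curl http://127.0.0.1:11434/api/generate -d '{"model": "deepseek-r1:1.5b", "prompt": "Why is the sky blue?", "stream": false}'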
Next, we’ll show you how to install Chatbox for a more user-friendly interface.
Step 3: Install Chatbox
1. Download Chatbox from its official website. Install and open the app. You’ll see a simple, user-friendly interface.
2. In Chatbox, go to Settings by clicking on the cog icon in the sidebar.

3. Set the Model Provider to Ollama.
4. Set the API host to:
http://127.0.0.1:11434
5. Select the DeepSeek R1 model (e.g., deepseek-r1:1.5b) from the dropdown menu.

6. Hit Save and start chatting.
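Troubleshooting tip: if the model doesn’t appear in the dropdown, make sure Ollama is running and the model has finished downloading. You can list the models Ollama has installed locally by querying its API:
curl http://127.0.0.1:11434/api/tags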

Method 2: Using Ollama and Docker
This method is great if you want to run the model in a Docker container.
Step 1: Install Docker
1. Go to the Docker website and download Docker Desktop for your OS. Install Docker by following the on-screen instructions.
2. Open the app and sign in (or create a Docker account if you don’t have one).
3. Type the command below in the Terminal to confirm the installation.
docker --version
You should see a version number, which means Docker is installed.
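The version check only confirms the CLI is installed. To verify the Docker daemon itself is running (it must be for the next steps), you can also run the command below; if it prints an error, launch Docker Desktop first.
docker info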

Step 2: Pull the Open WebUI Image
1. Open your terminal and type:
docker pull ghcr.io/open-webui/open-webui:main
2. This will download the necessary files for the interface.
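Once the pull finishes, you can confirm the image is available locally:
docker images ghcr.io/open-webui/open-webui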

Step 3: Run the Open WebUI Container
1. Start the Docker container with persistent data storage and mapped ports by running the command below (the flags are explained after these steps):
docker run -d -p 9783:8080 -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main
2. Wait a few seconds for the container to start.
3. Open your browser and go to:
http://localhost:9783/
4. Create an account as prompted, and you’ll be redirected to the main interface. At this point, there will be no models available for selection.
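A quick breakdown of that docker run command: -d runs the container in the background, -p 9783:8080 maps port 9783 on your machine to port 8080 inside the container (which is why the URL above uses 9783), -v open-webui:/app/backend/data keeps your chats and settings in a named Docker volume so they survive restarts, and --name open-webui gives the container a fixed name. That name lets you stop and restart it later:
docker stop open-webui
docker start open-webui
Note for Linux users: Open WebUI running inside Docker may not reach Ollama on the host by default; in that case, Open WebUI’s documentation suggests adding --add-host=host.docker.internal:host-gateway to the docker run command.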

Step 4: Set Up Ollama and Integrate DeepSeek R1
1. Visit the Ollama website, then download and install it (skip this if you already installed Ollama in Method 1).
2. In the terminal, download the desired DeepSeek R1 model by typing:
ollama run deepseek-r1:1.5b

3. Refresh the Open WebUI page in your browser. You’ll see the downloaded DeepSeek R1 model (e.g., deepseek-r1:1.5b) in the model list.

4. Select the model and start chatting.
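By the way, if you’d rather download a model without immediately opening a chat session in the Terminal, ollama pull performs the same download:
ollama pull deepseek-r1:1.5b
Open WebUI picks up any model Ollama has installed, whichever command you use.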

Method 3: Using LM Studio
This method works great if you don’t want to use the Terminal to interact with DeepSeek locally. However, at the time of writing, LM Studio only offers the distilled 7B (Qwen) and 8B (Llama) models, so if you want the 1.5B model or larger ones like 32B, this method won’t work for you.
1. Download LM Studio from its official website. Install and launch the application.
2. Click on the search icon in the sidebar and search for the DeepSeek R1 model (e.g., deepseek-r1-distill-llama-8b).

3. Click Download and wait for the process to complete.

4. Once downloaded, click on the search bar at the top of the LM Studio homepage.

5. Select the downloaded model and load it.

6. That’s it. Type your prompt in the text box and hit Enter. The model will generate a response.
Final Thoughts
Running DeepSeek R1 locally offers privacy, cost savings, and the flexibility to customize your AI setup.
If you’re new to this, start with Ollama and Chatbox for a simple setup. Docker is ideal for users familiar with containerization, while LM Studio works best for those avoiding terminal commands. Try a smaller model like the 8B or 1.5B to get started, and scale up as you go.