1️⃣ Install Ollama
First things first, download and install Ollama from the official site, https://ollama.com.
Once installed, confirm it’s working by running the following in your OS’s terminal:
ollama --version
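If the install succeeded, this prints a version string along the lines of ollama version is 0.5.7 (your exact number will differ depending on which release you installed).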
2️⃣ Download DeepSeek Models
DeepSeek-R1 comes in several sizes. Here’s how to run each one with Ollama:
- DeepSeek 1.5B
ollama run deepseek-r1:1.5b
- DeepSeek 7B
ollama run deepseek-r1:7b
- DeepSeek 8B
ollama run deepseek-r1:8b
This starts an interactive session where you can chat with the model.
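If you’d rather fetch a model without opening a chat session, you can pull it ahead of time and then check what’s installed (standard Ollama commands):
ollama pull deepseek-r1:7b
ollama list
To leave an interactive session later, type /bye at the prompt.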
3️⃣ Integrate with VS Code (Continue Plugin)
Now, let’s get this working inside VS Code with Continue, a plugin for AI-assisted coding.
Install the Continue Plugin
- Open VS Code
- Go to Extensions (Ctrl + Shift + X or Cmd + Shift + X)
- Search for “Continue” and install it
Configure Continue to Use DeepSeek
- Open VS Code Settings (Ctrl + ,)
- Search for Continue: Model Provider
- Set it to Ollama
- Open your Continue settings (continue.config.json) and add:
{
  "models": {
    "default": {
      "provider": "ollama",
      "model": "deepseek-ai/deepseek-coder:7b"
    }
  }
}
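For example, if you pulled deepseek-r1:7b in step 2, the entry would look roughly like this. Treat it as a sketch: the exact schema can vary between Continue versions, and the model value must match a tag you have actually pulled with Ollama, since Ollama can only serve models that exist locally.
{
  "models": {
    "default": {
      "provider": "ollama",
      "model": "deepseek-r1:7b"
    }
  }
}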
Or you can do it with the UI:
- Select Add Chat model
- In provider, select Ollama
- In Model, you can leave it on Autodetect; Continue will detect the model you are running with Ollama.
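Note that Autodetect can only list models if Ollama is running locally. The desktop app starts the server for you; otherwise you can start it manually with ollama serve and confirm which models are available with ollama list.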
Now, restart VS Code, open a project, and press Cmd + K (Mac) or Ctrl + K (Windows) to start chatting with DeepSeek inside Continue!