Deploy DeepSeek Locally with Ollama & Integrate with VS Code Completely Free

1️⃣ Install Ollama

First things first, download and install Ollama from the official site (ollama.com) — installers are available for macOS, Windows, and Linux.

Once installed, confirm it's working by running the following in your OS's terminal:
ollama --version

2️⃣ Download DeepSeek Models

DeepSeek R1 comes in different sizes. Here's how to run them using Ollama (the first run downloads the model):

  • DeepSeek R1 1.5B: ollama run deepseek-r1:1.5b
  • DeepSeek R1 7B: ollama run deepseek-r1:7b
  • DeepSeek R1 8B: ollama run deepseek-r1:8b

This starts an interactive session where you can chat with the model.
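Besides the interactive terminal session, Ollama also serves a local REST API (by default on port 11434), which is what editor integrations talk to. A minimal sketch, using only the Python standard library, of building a one-shot generation request — the model tag deepseek-r1:7b is just an example; substitute whichever size you pulled:

```python
import json
import urllib.request

# Build (but don't yet send) a request for Ollama's local /api/generate
# endpoint. host and endpoint are Ollama's documented defaults; the model
# tag should match a model you've pulled with `ollama run`/`ollama pull`.
def build_generate_request(model, prompt, host="http://localhost:11434"):
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("deepseek-r1:7b", "Write a haiku about local LLMs.")

# With the Ollama server running, send it like this:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

Because everything runs on localhost, your prompts and code never leave your machine.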

3️⃣ Integrate with VS Code (Continue Plugin)

Now, let’s get this working inside VS Code with Continue, a plugin for AI-assisted coding.

Install the Continue Plugin

  1. Open VS Code
  2. Go to Extensions (Ctrl + Shift + X or Cmd + Shift + X)
  3. Search for “Continue” and install it

Configure Continue to Use DeepSeek

  1. Open VS Code Settings (Ctrl + ,)
  2. Search for Continue: Model Provider
  3. Set it to Ollama
  4. Open Continue's config file (~/.continue/config.json) and add your model to the models array:
    {
      "models": [
        {
          "title": "DeepSeek R1 7B",
          "provider": "ollama",
          "model": "deepseek-r1:7b"
        }
      ]
    }

Alternatively, you can configure it through the UI:

  1. Select Add Chat model
  2. In provider, select Ollama
  3. In Model, you can leave it on Autodetect — Continue will detect the model you are running with Ollama

Now, restart VS Code, open a project, and press Cmd + L (Mac) or Ctrl + L (Windows/Linux) to start chatting with DeepSeek inside Continue!
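Under the hood, Continue talks to the same local Ollama server, using the chat-style /api/chat endpoint. A hedged sketch of what such a request looks like (again, deepseek-r1:7b is an example tag; use your own):

```python
import json
import urllib.request

# Build a chat-style request for Ollama's local /api/chat endpoint,
# the kind of call an editor integration makes on your behalf.
# The model tag is an example; use whichever model you pulled.
def build_chat_request(model, messages, host="http://localhost:11434"):
    payload = {"model": model, "messages": messages, "stream": False}
    return urllib.request.Request(
        f"{host}/api/chat",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request(
    "deepseek-r1:7b",
    [{"role": "user", "content": "Explain what this function does."}],
)

# With the Ollama server running, send it like this:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["message"]["content"])
```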

