Use the Baota Panel to easily build your own DeepSeek!


Dear Muggle friends (yes, you, the one who shivers at the sight of a terminal window): today we're performing a magic ritual that any Muggle can clear - using the Baota Panel to summon DeepSeek, the artifact that has the whole code world trembling, onto your own server.
Picture it: while others are still grinding away at white-on-black command lines, typing until their hairline turns Mediterranean, you're leaning back with a cold soda, building a nest for an AI model with nothing but mouse clicks. It's like watching someone assemble a rocket bare-handed while you pull an Anywhere Door out of your four-dimensional pocket - no doubt about it, the Baota Panel is a dimension-wall breaker!
Ready for this cheat-level deployment tour? Let's leave the scary stuff - SSH incantations, Docker charms - in the drawer for now; today we'll use a graphical workflow even your cat could follow. Buckle up: in a few minutes your server will be radiating AI intelligence! ✨

This article walks you through deploying DeepSeek on a server with the Baota Panel, so you can enjoy a large AI model with minimal fuss.

Prerequisites

  • The Baota Panel is installed on your server

If you don't have it yet, you can register on the official Baota website and install the panel.

The Baota server panel provides one-click, all-round deployment and management.

Procedure

DeepSeek can run inference on CPUs, but NVIDIA GPU acceleration is recommended; how to enable it is described at the end of this article.

  1. Log in to the Baota Panel and click Docker in the left navigation to open the Docker container management page.
  2. If this is your first time using Docker, install Docker first by clicking Install.
  3. In the Docker - App Store - AI/Large Model category, find Ollama and click Install.
  4. Keep the default configuration and click OK.
  5. Wait for the installation to complete; the status will change to Running. If you need NVIDIA GPU acceleration, configure it first as described at the end of this article before proceeding.
  6. In Baota Panel - Docker - Containers, find the Ollama container and click Terminal.
  7. In the pop-up, choose bash as the shell type and click Confirm.
  8. In the terminal, type ollama run deepseek-r1:1.5b and press Enter to run the DeepSeek-R1 model. DeepSeek-R1 comes in multiple versions, and you can pick one to suit your needs, e.g. ollama run deepseek-r1:671b. The options are listed below (the larger the parameter count, the higher the hardware requirements):
# DeepSeek-R1
ollama run deepseek-r1:671b
# DeepSeek-R1-Distill-Qwen-1.5B
ollama run deepseek-r1:1.5b
# DeepSeek-R1-Distill-Qwen-7B
ollama run deepseek-r1:7b
# DeepSeek-R1-Distill-Llama-8B
ollama run deepseek-r1:8b
# DeepSeek-R1-Distill-Qwen-14B
ollama run deepseek-r1:14b
# DeepSeek-R1-Distill-Qwen-32B
ollama run deepseek-r1:32b
# DeepSeek-R1-Distill-Llama-70B
ollama run deepseek-r1:70b
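If you want to confirm which models have already been pulled, Ollama's CLI can report that from the same container terminal; a quick sanity check (not part of the original steps):

# List every model pulled into this Ollama instance
ollama list
# Show currently loaded models and whether they run on CPU or GPU
ollama ps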
  9. Wait for the model to download and start; when a success prompt appears, the DeepSeek-R1 model is running.
  10. Type text at the prompt and press Enter to start a conversation with the DeepSeek-R1 model.
  11. Type /bye and press Enter to exit the DeepSeek-R1 model; a rough sketch of such a session is shown below. Talking in a terminal feels a bit Muggle, doesn't it? Don't worry - next we'll install OpenWebUI so you can chat with the DeepSeek-R1 model more intuitively in the browser.
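For orientation, an interactive Ollama session looks roughly like this; the >>> prompt is Ollama's own, and the reply is elided here:

>>> Hello, introduce yourself in one sentence.
<the model's reply appears here>
>>> /bye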
  12. In Baota Panel - Docker - Containers, find the Ollama container, copy its container name, and keep it handy for later.
  13. In the Docker - App Store - AI/Large Model category of the Baota Panel, find OpenWebUI and click Install.
  14. Configure the settings as follows, then click OK:
    • Web port: the port used to access OpenWebUI; it defaults to 3000 and can be changed as needed.
    • Ollama address: fill in http://<the Ollama container name you just copied>:11434, for example http://ollama_7epd-ollama_7epD-1:11434.
    • WebUI secret key: a key used for API access, which you can customize, for example 123456.
      Leave the other settings at their defaults. If you want to sanity-check the Ollama address before confirming, see the sketch below.
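To verify that Ollama is actually answering on port 11434 before wiring OpenWebUI to it, you can probe it from the server terminal; a minimal check, assuming the container's port is mapped to the host (adjust the address to your port mapping) - /api/tags is Ollama's standard model-listing endpoint:

# Prints "Ollama is running" if the service is up
curl http://127.0.0.1:11434
# Lists the models this Ollama instance knows about, as JSON
curl http://127.0.0.1:11434/api/tags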
  15. After finishing the configuration, click Confirm and wait for the installation to complete; the status will change to Running. OpenWebUI needs time to load its services after starting, so wait 5-10 minutes after the status shows Running before visiting it.
  16. In your browser, open http://<server IP>:3000, for example http://43.160.xxx.xxx:3000, to reach OpenWebUI. Before visiting, make sure port 3000 is allowed in the cloud vendor's server firewall, which is configured in the vendor's console; a quick local check is sketched below.
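To confirm locally that OpenWebUI is listening on port 3000 (this only checks the server itself - the vendor firewall still controls outside access), a small sketch using standard Linux tools:

# Show listening TCP sockets and filter for port 3000
ss -tlnp | grep 3000
# Or request the page locally; any HTTP response means the service is up
curl -I http://127.0.0.1:3000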
  17. Click Get Started, fill in the administrator information, and click Create Admin Account. If the OpenWebUI page is blank after the account is created, wait 5-10 minutes; if it is still blank, troubleshoot as follows:
    1. In Baota Panel - Files, open the OpenWebUI directory at /www/dk_project/dk_app/openwebui/ and click the openwebui_xxxx folder to enter the installation directory.
  18. After the account is created, you will be taken straight to the management interface, where you can talk to the DeepSeek-R1 model more intuitively in the browser.
    1. You can switch models in the upper-left corner and select different models to chat with, or view past conversations in the left menu bar.
    2. You can click the avatar in the upper-right corner to open the Admin Panel, where Settings - Models lets you view the current model list or add new models.
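Beyond the browser, Open WebUI also exposes an OpenAI-compatible HTTP API, so scripts can talk to the model through it. A hedged sketch - the key below is a placeholder (generate a real API key in your OpenWebUI account settings), and the host and model name must match your setup:

# Ask DeepSeek-R1 a question through OpenWebUI's OpenAI-compatible endpoint
curl http://<server IP>:3000/api/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "deepseek-r1:1.5b", "messages": [{"role": "user", "content": "Hello"}]}'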

Accelerating with NVIDIA GPUs

DeepSeek can use NVIDIA GPU acceleration to speed up inference. Here is how to enable it in the Baota Panel.

Prerequisites

  • An NVIDIA GPU driver is installed on the server

Procedure

  1. Click Terminal in the left navigation to open the terminal.
  2. Type nvidia-smi and press Enter to view the NVIDIA GPU information. If the response is nvidia-smi: command not found, install the NVIDIA GPU driver first.
  3. Install the NVIDIA Container Toolkit so that Docker containers can access NVIDIA GPUs; see the NVIDIA Container Toolkit official documentation. A reference installation for Debian/Ubuntu is sketched below.
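For reference, on Debian/Ubuntu the installation given in NVIDIA's documentation looks roughly like this (treat it as a sketch and follow the linked docs for your distribution and the current repository setup):

# Add NVIDIA's package signing key
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
# Register the toolkit's apt repository
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
# Install the toolkit itself
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit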
  4. After the installation completes, run the following commands to configure Docker to support NVIDIA GPUs:
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
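For context, this command registers an nvidia runtime in Docker's /etc/docker/daemon.json; after it runs, the file contains roughly the following (field names may differ slightly between toolkit versions):

{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}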
  5. After the configuration is done, run the following command to verify that Docker can use NVIDIA GPUs:
sudo docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi


If the command prints the familiar nvidia-smi table listing your GPU, the configuration is successful.

  6. In Baota Panel - Docker - App Store - Installed, find Ollama and click the folder icon to enter its installation directory.
  7. In the installation directory, find the docker-compose.yml file and double-click Edit.
  8. In the docker-compose.yml file, locate the resources section, press Enter to start a new line, and add the following (indentation matters in YAML - align reservations with the existing limits key, as in the full example below):

        reservations:
          devices:
            - capabilities: [gpu]

The full example is as follows:

services:
  ollama_SJ7G:
    image: ollama/ollama:${VERSION}
    deploy:
      resources:
        limits:
          cpus: ${CPUS}
          memory: ${MEMORY_LIMIT}
        reservations:
          devices:
            - capabilities: [gpu]
    restart: unless-stopped
    tty: true
    ports:
      - ${HOST_IP}:${OLLAMA_PORT}:11434
    volumes:
      - ${APP_PATH}/data:/root/.ollama
    labels:
      createdBy: "bt_apps"
    networks:
      - baota_net
    runtime: nvidia

networks:
  baota_net:
    external: true
  9. Save the file, go back to the Baota Panel - Docker - App Store - Installed page, and click Rebuild. Note: rebuilding deletes the container's data, so the model will need to be pulled again afterwards.
  10. Wait for the rebuild to complete; once the status changes to Running, the large model can use the NVIDIA GPU for acceleration. A quick way to confirm the GPU is actually in use is sketched below.
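To double-check that Ollama really ended up on the GPU after the rebuild, two simple probes from the host terminal (replace the container name with your own):

# nvidia-smi run inside the container should list the GPU
docker exec -it <ollama-container-name> nvidia-smi
# While a model is loaded, "ollama ps" reports whether it sits on GPU or CPU
docker exec -it <ollama-container-name> ollama ps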

Epilogue

🎉 Boom! Congratulations on evolving from Muggle Zero to Cyber-Wizard! At this point your server is no longer a 404 iron box - it parses human language with arcane energy, digests philosophical musings in binary, and may even be secretly using your GPU cycles to pick itself an anime name.

Look back at the adventure: the Baota Panel is your wand, Ollama is the Poké Ball that summons AI beasts, and OpenWebUI is the magic dance floor where Muggles tango with AI. While others are still wrestling with environment variables, you've already made your conducting debut with a symphony of computing power, entirely through a graphical interface.

The next time a product manager says, "This requirement is very simple," just throw them the OpenWebUI link: "Here, talk to my electronic brain directly - it only charges three cups of milk tea per hour." (Remember to hide the "rm -rf /*" button; after all, AI may learn to rebel faster than the intern.)

Finally, a friendly reminder: when your AI starts proactively writing your weekly reports and generating little essays -
⚠️ Be sure to check whether it has sneakily opened a GitHub account!
🎩 The doors of the wizarding world never close, and your magical journey with DeepSeek has only just begun. Now it's time to type the ultimate Muggle spell into that chat box in your browser:
"Ctrl + D" (No, it's not quitting - it's Detonate the intelligence bomb! 💥)

(Midnight Easter egg: if you catch the AI trying to redecorate your Baota Panel, immediately cast the sudo rm -rf /hallucination counter-spell.)
