Dear Muggle friends (yes, you, the one who shivers at the sight of a terminal), today we're going to perform a magical ritual that any Muggle can pull off: using the Baota Panel to summon DeepSeek, the "deep fishing" artifact that has the whole coding world trembling, onto your own server!
Imagine it: while others are still grinding away at a black-and-white command line, shedding their hair into a Mediterranean hairline, you're lounging with your legs crossed, sipping your happy fat-house cola, and building a nest for an AI model with a few mouse clicks. It's like watching someone build a rocket with their bare hands while you pull an Anywhere Door out of your four-dimensional pocket. Don't doubt it: the Baota Panel is the dimensional-wall breaker!
Ready for this cheat-level deployment tour? Let's shove the scary stuff like SSH spells and Docker charms into a drawer for now; today we'll use a graphical workflow that even your cat could follow. Buckle up: in three minutes, your server will be sparkling with AI intelligence! ✨
This article walks you through deploying DeepSeek on a server with the Baota Panel, so you can enjoy a large AI model with minimal fuss.
Prerequisites
- The Baota Panel is installed. If you don't have it yet, register on the official Baota website and install the panel; it provides one-click, all-in-one deployment and management.
Procedure
DeepSeek can run inference on a CPU, but NVIDIA GPU acceleration is recommended; how to enable it is covered at the end of this article.
- Log in to the Baota Panel and click Docker to open the Docker container management page. If this is your first time using Docker, click Install to set it up first.
- In Docker - App Store - AI/Large Model, find Ollama and click Install.
- Keep the default configuration and click Confirm.
- Wait for the installation to finish; the status will change to Running. If you want NVIDIA GPU acceleration, configure it first (see the end of this article) before continuing.
- In Baota Panel - Docker - Containers, find the Ollama container and click Terminal.
- In the pop-up, choose bash as the shell type and click Confirm.
- In the terminal, type ollama run deepseek-r1:1.5b and press Enter to run the DeepSeek-R1 model. DeepSeek-R1 comes in several sizes; pick the one that matches your hardware, e.g. ollama run deepseek-r1:671b for the full model. The available tags are listed below (the more parameters, the higher the hardware requirements):
```bash
# DeepSeek-R1
ollama run deepseek-r1:671b
# DeepSeek-R1-Distill-Qwen-1.5B
ollama run deepseek-r1:1.5b
# DeepSeek-R1-Distill-Qwen-7B
ollama run deepseek-r1:7b
# DeepSeek-R1-Distill-Llama-8B
ollama run deepseek-r1:8b
# DeepSeek-R1-Distill-Qwen-14B
ollama run deepseek-r1:14b
# DeepSeek-R1-Distill-Qwen-32B
ollama run deepseek-r1:32b
# DeepSeek-R1-Distill-Llama-70B
ollama run deepseek-r1:70b
```
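Before pulling one of these, it's worth checking your hardware. Below is a rough pre-flight sketch (the exact requirements depend on the quantization Ollama ships; as a rule of thumb, the model must fit on disk, and in RAM for CPU inference or in VRAM for GPU inference). The /www path is an assumption based on Baota's default layout; adjust it to wherever your Docker data actually lives:

```bash
# Quick pre-flight check before choosing a model size (a sketch).
free -h          # available RAM (matters for CPU inference)
df -h /www       # free disk space; Baota apps default under /www
nvidia-smi       # GPU model and VRAM, if you plan on GPU acceleration
```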
- Wait for the model to download and start. When a prompt appears, the DeepSeek-R1 model is running.
- Type text at the prompt and press Enter to start a conversation with the DeepSeek-R1 model. (Prefer scripting to chatting? See the API sketch below.)
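Ollama also exposes an HTTP API on port 11434, so you can talk to the model programmatically. A minimal sketch, assuming the Baota app kept the default port mapping to the host:

```bash
# Ask DeepSeek-R1 a question through Ollama's REST API.
# "stream": false returns one JSON object instead of a token stream.
curl http://127.0.0.1:11434/api/generate -d '{
  "model": "deepseek-r1:1.5b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```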
- Type /bye and press Enter to exit the DeepSeek-R1 session. Chatting in a terminal feels a bit Muggle, doesn't it? Don't worry: next we'll install OpenWebUI so you can talk to the DeepSeek-R1 model in your browser.
- In Baota Panel - Docker - Containers, find the Ollama container, copy its container name, and save it for later.
- In Docker - App Store - AI/Large Model, find OpenWebUI and click Install.
- Fill in the configuration as follows, then click Confirm:
  - Web port: the port used to access OpenWebUI; defaults to 3000, change it as needed.
  - Ollama address: fill in http://<the Ollama container name you just copied>:11434, for example http://ollama_7epd-ollama_7epD-1:11434.
  - WebUI secret key: a key used for API access; set it to anything you like, for example 123456.
  - Leave the other settings at their defaults.
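You can sanity-check that Ollama is reachable at all before wiring OpenWebUI to it. Note that OpenWebUI reaches Ollama over the internal Docker network via the container name; the sketch below only confirms Ollama is up, assuming the default 11434 host port mapping:

```bash
# A healthy Ollama answers a plain GET with the text "Ollama is running".
curl http://127.0.0.1:11434
```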
- After confirming, wait for the installation to complete; the status will change to Running. OpenWebUI loads its services after starting, so wait 5-10 minutes after the status shows Running before visiting.
- In your browser, open http://<server IP>:3000, for example http://43.160.xxx.xxx:3000, to reach OpenWebUI. Before visiting, make sure port 3000 is allowed in your cloud provider's firewall, which can be set in the provider's console.
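If your OS runs its own firewall in addition to the provider's security group, open the port there too. A sketch assuming Ubuntu with ufw enabled:

```bash
# Allow inbound connections to OpenWebUI's web port, then confirm the rule.
sudo ufw allow 3000/tcp
sudo ufw status
```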
- Click Get Started, fill in the administrator information, and click Create Admin Account. If the OpenWebUI page is blank after the account is created, wait 5-10 minutes; if it is still blank, troubleshoot as follows (checking the container logs, as sketched below, usually reveals the cause):
  - In Baota Panel - File Management, open the OpenWebUI directory at /www/dk_project/dk_app/openwebui/ and click the openwebui_xxxx folder to enter the installation directory.
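When the page stays blank, the container logs usually say why. A sketch, substituting the real container name copied from Baota Panel - Docker - Containers:

```bash
# Follow the last 100 lines of the OpenWebUI container's logs.
docker logs -f --tail 100 <openwebui-container-name>
```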
- Once the account is created, you will land in the management interface, where you can chat with the DeepSeek-R1 model in the browser.
- Switch models in the upper-left corner to chat with a different one, or browse past conversations in the left menu bar.
- Click your avatar in the top-right corner, open the Admin Panel, and under Settings - Models view the current model list or add a new model.
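You can also add models from the Ollama container's terminal rather than the admin panel; after a page refresh they show up in OpenWebUI's model list. A sketch pulling one of the tags listed earlier:

```bash
# Download another DeepSeek-R1 variant without starting a chat session,
# then list everything Ollama has locally.
ollama pull deepseek-r1:7b
ollama list
```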
Accelerating with NVIDIA GPUs
DeepSeek can use NVIDIA GPU acceleration to speed up inference. Here is how to enable it in the Baota Panel.
Prerequisites
- The server has an NVIDIA GPU driver installed
Procedure
- In the left navigation, click Terminal to open the terminal interface.
- Type nvidia-smi and press Enter to view the NVIDIA GPU information. If you see nvidia-smi: command not found, install the NVIDIA GPU driver first.
- Install the NVIDIA Container Toolkit so Docker containers can access the NVIDIA GPU; see the NVIDIA Container Toolkit official documentation (an apt-based sketch follows).
- After the installation completes, run the following commands to configure Docker to use NVIDIA GPUs:
```bash
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```
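If you're curious what that did: nvidia-ctk edits Docker's daemon configuration to register the NVIDIA runtime. You can inspect the result:

```bash
# After nvidia-ctk runs, this file should contain a "runtimes" entry
# pointing at nvidia-container-runtime.
cat /etc/docker/daemon.json
```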
- After the configuration is complete, run the following command to verify that Docker can use the NVIDIA GPU:
```bash
sudo docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
```
If the familiar nvidia-smi table (driver version, GPU model, memory usage) is printed, the configuration succeeded.
- In Baota Panel - Docker - App Store - Installed, find Ollama and click the folder icon to enter its installation directory.
- In the installation directory, find the docker-compose.yml file and double-click Edit.
- In docker-compose.yml, under resources, press Enter to start a new line and add the following:
```yaml
reservations:
  devices:
    - capabilities: [gpu]
```
The full example is as follows:
```yaml
services:
  ollama_SJ7G:
    image: ollama/ollama:${VERSION}
    deploy:
      resources:
        limits:
          cpus: ${CPUS}
          memory: ${MEMORY_LIMIT}
        reservations:
          devices:
            - capabilities: [gpu]
    restart: unless-stopped
    tty: true
    ports:
      - ${HOST_IP}:${OLLAMA_PORT}:11434
    volumes:
      - ${APP_PATH}/data:/root/.ollama
    labels:
      createdBy: "bt_apps"
    networks:
      - baota_net
    runtime: nvidia
networks:
  baota_net:
    external: true
```
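YAML is indentation-sensitive, so it's worth validating the file before rebuilding. A sketch, run from the Ollama installation directory (older setups may use the hyphenated docker-compose binary instead):

```bash
# Parse and validate docker-compose.yml without starting anything.
docker compose config --quiet && echo "docker-compose.yml OK"
```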
- Save the file, go back to Baota Panel - Docker - App Store - Installed, and click Rebuild. Rebuilding wipes the container's data, so you will need to pull the model again afterwards.
- Wait for the rebuild to finish; once the status shows Running, your large models are accelerated by the NVIDIA GPU (see the verification sketch below).
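To confirm the GPU is actually being used, a quick sketch, substituting the container name you copied earlier:

```bash
# The container should now see the GPU...
docker exec -it <ollama-container-name> nvidia-smi
# ...and while a model is loaded, "ollama ps" reports GPU vs. CPU placement.
docker exec -it <ollama-container-name> ollama ps
```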
Epilogue
🎉 Ding! Congratulations on evolving from Muggle Zero to Cyber-Wizard! From this moment, your server is no longer that 404-spewing iron box: it parses human language with arcane energy, digests philosophical musings in binary, and may even be secretly using your GPU cycles to pick itself a cute anime name.
Look back at the adventure: the Baota Panel was your wand, Ollama the Poké Ball that summons AI beasts, and OpenWebUI the magic dance floor where Muggles tango with AI. While others are still wrestling with environment variables, you've made your conducting debut, leading a symphony of computing power from a graphical interface.
The next time a product manager says "this requirement is very simple", just toss over your OpenWebUI link: "Here, talk to my electronic brain directly; it only charges three cups of milk tea per hour." (Remember to hide the rm -rf /* button; after all, an AI may learn to rebel faster than an intern.)
Finally, a friendly reminder: when your AI starts volunteering to write your weekly reports and churn out little essays, ⚠️ be sure to check whether it has quietly opened a GitHub account!
🎩 The doors of the wizarding world never close, and your magical journey with DeepSeek has only just begun. Now, say the ultimate Muggle spell into that chat box in your browser: "***"

(Midnight Easter egg: if you catch the AI trying to redecorate your Baota Panel, run the sudo rm -rf /hallucination command immediately.)