For a locally deployed DeepSeek (see the earlier posts "How to install and use the DeepSeek R1 large model on your own computer?" and "DeepSeek R1 Local Deployment Guide"), how do you make its answers more efficient? This is where a knowledge base comes into play. The stack used in this article is Ollama + Docker + Dify.
Note: Before you start, make sure you have Git and Python installed, and that you have a reliable internet connection.
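You can quickly verify both from a terminal:
git --version
python --version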
I. Installing Ollama and Docker and downloading the large model
The method follows the first and second parts of this earlier article: DeepSeek-R1 Local Deployment
II. Installing Dify (Windows as an example)
Before installing, make sure both Ollama and Docker are running, i.e. their icons are visible in the taskbar:
Open the directory where you want to install Dify, right-click, and select "Open in Terminal":
Then execute the following commands in sequence:
git clone https://github.com/langgenius/dify.git --branch 0.15.3
cd dify/docker
cp .env.example .env
docker compose up -d
After the last command, Docker pulls and starts all of Dify's containers, printing progress like this; just wait for it to finish.
You can then run the following command to check whether the containers are running properly:
docker compose ps
In the output, you should see the following services in the leftmost column: api, db, nginx, redis, sandbox, ssrf_proxy, weaviate, web, worker
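If a service is missing from the list or keeps restarting, its logs usually explain why. For example, to follow the logs of the api service:
docker compose logs -f api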
You can also open the Containers panel in Docker Desktop and click the part marked by the red box:
The same services listed above can be seen there:
Next, set up the administrator account: open the URL http://localhost/install, enter your email, username, and password, then confirm and log in:
The installation was successful! From now on you can simply visit http://localhost to open the main Dify page:
If you can't open the page at the above URL (or it jumps to a Microsoft page after opening), the default port 80 is probably occupied by another program, and you need to change the port Dify uses.
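On Windows you can confirm this before editing anything: the built-in netstat tool lists the listening sockets, and the PID in the last column identifies the process occupying the port:
netstat -ano | findstr :80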
To change the port, open the .env file in the dify\docker folder with a text editor:
Find the two lines below:
EXPOSE_NGINX_PORT=80
EXPOSE_NGINX_SSL_PORT=443
Change them to:
EXPOSE_NGINX_PORT=8080
EXPOSE_NGINX_SSL_PORT=8443
Save the file, then run the following two commands in the terminal to restart the services:
docker compose down
docker compose up -d
At this point the default port has been changed to 8080, and you can use Dify at the following URLs:
Dify setup page: http://localhost:8080/install
Dify main page: http://localhost:8080
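A quick way to confirm that nginx is answering on the new port is to request just the response headers; curl.exe ships with Windows 10 and later (the .exe suffix avoids PowerShell's curl alias), and any HTTP status line in the output means the port is live:
curl.exe -I http://localhost:8080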
III. Setting up the model
Dify can manage a variety of models. Click your username in the top-right corner and select "Settings" from the drop-down menu:
Select "Model Providers", find Ollama and click "Add Model":
Model Type: select LLM
Model Name: the model you downloaded locally, e.g. deepseek-r1:14b
Base URL: http://host.docker.internal:11434
Then click "Save"; if no error message appears, the model has been added.
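If you do get an error, check that Ollama is actually serving and that the model name matches exactly. ollama list shows the exact names of the locally downloaded models, and /api/tags is Ollama's standard endpoint for listing them over HTTP:
ollama list
curl.exe http://localhost:11434/api/tags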
Go back to the main Dify page and click "Create a blank app":
Select "Chat Assistant", name it whatever you want, and click "Create":
In the Chat Assistant window, the model you installed appears in the top-right corner (refresh the page if you can't see it), and the chat box is at the bottom right, so you can test it out:
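Beyond the web chat box, every Dify app can also be called over HTTP. Below is a minimal sketch, assuming the default port; the app-xxxxxxxx key is a placeholder, so copy your real key from the app's "API Access" page. Since Git is already installed, the easiest place to run it is Git Bash:
curl -X POST 'http://localhost/v1/chat-messages' --header 'Authorization: Bearer app-xxxxxxxx' --header 'Content-Type: application/json' --data '{"inputs": {}, "query": "Hello", "response_mode": "blocking", "user": "local-test"}'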
IV. Creating a knowledge base
On the main Dify page, click "Knowledge Base" at the top; here you can choose to "create a knowledge base" or "connect to an external knowledge base":
In "Create Knowledge Base", you can create an empty knowledge base or select a data source:
For an empty knowledge base, just give it a name and confirm.
If you import a data source, there are several options; taking "import existing text" as the example, select the text file (a single file must not exceed 15MB) and click "Next":
Adjust the settings to your needs, and finally click "Save and Process":
The text is then added to the knowledge base:
In the "Context" field at the bottom of the chat assistant window, select "Add" to attach the knowledge base. Now, if you ask a question about content in the knowledge base, the AI will give the appropriate answer:
At this point, a simple knowledge base for large models has been built.
Dify's knowledge base is very feature-rich; for more on how to use it, see the official documentation linked at the bottom of this post.
Links referenced in this article:
Dify's GitHub repository:
https://github.com/langgenius/dify
Dify Knowledge Base Building Official Documentation:
https://docs.dify.ai/zh-hans/guides/knowledge-base/create-knowledge-and-upload-documents