2. Dialogue platform installation

- Environmental requirements
- Docker build
  - Official Docker installation
  - Install Docker via Alibaba Cloud
- Install Open WebUI
- Run Open WebUI
- Administrator account
  - Register and log in
- User interface
  - Model dialogue
  - Switching models
  - Demo: Qwen2
- FAQ
  - Close Open WebUI
  - Common mistakes
    - Unable to start Open WebUI
    - Service connection timeout
Demo Environment

Development board: Jetson Nano
SD (TF) card: 64 GB

It is recommended to run models with 4B parameters or fewer.
Open WebUI is an open-source project that provides a simple, easy-to-use web interface (UI) for chatting with and managing large language models served by backends such as Ollama.
When using Open WebUI, there is a high probability that the dialogue will become unresponsive or time out. You can try restarting Open WebUI or running the model with the Ollama tool instead.
To install Open WebUI directly on the host or in a Conda environment, you need Node.js >= 20.10 and Python 3.11:
| Environment construction method | Difficulty level |
| --- | --- |
| Host | High |
| Conda | Middle |
| Docker | Low |
This tutorial demonstrates how to install Open WebUI with Docker.
If Docker is not installed, you can install it in one step with the official convenience script.
The Jetson system image we provide already has Docker installed, so you don't need to install it yourself.
sudo apt update
sudo apt upgrade
Download the get-docker.sh file and save it in the current directory.
sudo apt install curl
curl -fsSL https://get.docker.com -o get-docker.sh
Run the get-docker.sh script file with sudo privileges.
sudo sh get-docker.sh
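After the script finishes, you can optionally run a quick sanity check (a sketch assuming the Docker daemon was started by the installer):

```shell
# Print the installed Docker version
sudo docker --version

# Pull and run the hello-world image to confirm the daemon
# can fetch images and start containers
sudo docker run --rm hello-world
```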
If you cannot install it yourself, please use the system image we provide.
sudo apt update
sudo apt install apt-transport-https ca-certificates curl gnupg2 lsb-release software-properties-common
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=arm64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://mirrors.aliyun.com/docker-ce/linux/ubuntu bionic stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-compose-plugin
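Once the packages are installed, you can confirm that the Docker service is running before moving on (standard systemd and Docker CLI commands):

```shell
# Should print "active" if the Docker daemon started correctly
sudo systemctl is-active docker

# Print both client and server versions to verify the daemon responds
sudo docker version
```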
For systems that already have Docker installed, you can pull the Open WebUI image directly in the terminal:
sudo docker pull ghcr.io/open-webui/open-webui:main
Enter the following command to start the Open WebUI container:
sudo docker run --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
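The flags above share the host network with the container (--network=host), persist data in the open-webui volume, point Open WebUI at the local Ollama service, and restart the container automatically. A minimal way to confirm the container came up (standard Docker CLI commands):

```shell
# List running containers filtered by name; status should show "Up"
sudo docker ps --filter name=open-webui

# Follow the container logs; the first launch can take a while
sudo docker logs -f open-webui
```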
After a successful startup, enter the following URL in a browser to access Open WebUI.
http://localhost:8080/
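If the page does not load, you can check from the terminal whether the service is answering on port 8080 (a simple sketch using curl):

```shell
# Print only the HTTP status code; 200 means Open WebUI is serving pages
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/
```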
On first use, you need to register an account yourself. This first account becomes the administrator account, and the information can be filled in as required.
Since everything in our image is already set up and tested, you can log in directly with the pre-registered administrator account:

Username: admin
Email: admin@qq.com
Password: admin
Conversations through Open WebUI are slower than using the Ollama tool directly, and may even fail with a service connection timeout. This is a limitation of the Jetson Nano's memory and cannot be avoided.
You can switch to another Linux environment to build the Ollama and Open WebUI tools for dialogue.
Click Select a model to choose the model for the dialogue.
Models pulled with Ollama are automatically added to Open WebUI's model options; the new model appears when you refresh the web page.
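For example, to make a new model available, pull it with Ollama and then refresh the page. The model tag below is an assumption; choose any tag that fits the 4B-and-under recommendation:

```shell
# Pull a small Qwen2 model with Ollama (tag is an example, adjust as needed)
ollama pull qwen2:1.5b

# List the local models that Open WebUI will pick up
ollama list
```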
Tell me something about large language models.
Close the automatically started Open WebUI:
sudo docker ps
sudo docker stop [CONTAINER ID]  # e.g. sudo docker stop 5f42ee9cf784
sudo docker ps -a
sudo docker rm [CONTAINER ID]  # e.g. sudo docker rm 5f42ee9cf784
sudo docker container prune
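Because the container was started with --name open-webui, you can also stop and remove it by name instead of looking up the container ID:

```shell
# Stop and remove the Open WebUI container by its fixed name
sudo docker stop open-webui
sudo docker rm open-webui
```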
Solution: Close Open WebUI and restart it.
Close Open WebUI and restart it, then ask the question again, or run the model with the Ollama tool to ask the question.
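Since the container was created with a fixed name, restarting it is a single command (assuming the container still exists):

```shell
# Restart the Open WebUI container in place
sudo docker restart open-webui
```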