Project:LLMQueryService POC
Latest revision as of 11:36, 18 November 2024
This page describes how to install the proof-of-concept LLM-based query service.
Try it here: http://130.73.240.230/ (only from the ZIB network or VPN)
Using an OpenStack VM
- Create a new instance (if you use Debian: at least 12)
- See Project:Docker_OpenStackVM
- Install necessary libraries
- apt-get update
- apt-get install git python3-pip python3-venv
- Clone the repository and follow the rest of the manual ( https://git.zib.de/bzfconra/mardi_llm_bottest )
- Install Ollama ( https://ollama.com/download/linux )
- Check the logs: journalctl -u ollama -f
- You might need ssh-port-forwarding
- Example: ssh -L 8000:127.0.0.1:8501 -i OPENSTACK_KEY_FILE.pem debian@130.73.240.230
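The setup steps above can be collected into a single script. This is a dry-run sketch, not tested on a fresh VM: each command is only printed via a `run` helper (replace the `echo` with `"$@"` to execute for real), and the Ollama install line follows the Linux instructions linked above.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the VM setup; commands are printed, not executed.
run() { echo "+ $*"; }   # replace the body with "$@" to run for real

run sudo apt-get update
run sudo apt-get install -y git python3-pip python3-venv
run git clone https://git.zib.de/bzfconra/mardi_llm_bottest
# Ollama's Linux installer (see https://ollama.com/download/linux):
run 'curl -fsSL https://ollama.com/install.sh | sh'
run journalctl -u ollama -f   # follow the Ollama service logs
```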
The ZIB LLM Server
- Reachable only from within the ZIB network
- To see the installed models: curl https://SERVERNAME/api/tags | jq
- To install a new model: curl https://SERVERNAME/api/pull -d '{"name": "qwen2.5:0.5b"}' | jq
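The two API calls above can be wrapped in a small helper script. SERVERNAME stays a placeholder here, so the commands are only printed (dry run); drop the `echo` inside `run` to execute them from within the ZIB network, and note that `jq` must be installed locally.

```shell
run() { echo "+ $*"; }        # print instead of execute

SERVER="https://SERVERNAME"   # placeholder, reachable only inside ZIB

run "curl -s $SERVER/api/tags | jq"   # list installed models
run "curl -s $SERVER/api/pull -d '{\"name\": \"qwen2.5:0.5b\"}' | jq"   # pull a model
```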
Updating the ZIB LLM Server
- SSH into it: ssh debian@IP -i key.pem
- Kill the currently running bot
- Change to the mardi_llm_bottest directory
- Pull the newest version from Git
- Change to the webapp sub-directory
- Activate virtual environment: source .venv/bin/activate
- Update requirements: pip install -r requirements.txt
- (Optional) Adjust config
- Run bot: nohup sudo $(which python) -m streamlit run web_app.py > nohup.log &
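The update procedure above, collected into one dry-run script. The checkout path under `$HOME` and the `pkill` pattern for the running bot are assumptions, not verified against the actual server; replace the `echo` in `run` with `"$@"` to execute the steps for real.

```shell
run() { echo "+ $*"; }   # print instead of execute

run 'pkill -f "streamlit run web_app.py"'   # kill the currently running bot (assumed process name)
run cd "$HOME/mardi_llm_bottest"            # assumed checkout location
run git pull                                # newest version from Git
run cd webapp
run source .venv/bin/activate
run pip install -r requirements.txt
run 'nohup sudo $(which python) -m streamlit run web_app.py > nohup.log &'
```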