Add dual GPU support with web UI selector

Features:
- Built custom ROCm container for AMD RX 6800 GPU
- Added GPU selection toggle in web UI (NVIDIA/AMD)
- Unified model names across both GPUs for seamless switching
- Vision model is always routed to the NVIDIA GPU, where it performs best
- Text models (llama3.1, darkidol) can use either GPU
- Added /gpu-status and /gpu-select API endpoints (sketched after the file lists below)
- Implemented GPU state persistence in memory/gpu_state.json (routing sketch below)
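
The routing helpers below are a minimal sketch, assuming gpu_state.json holds a
single {"gpu": "nvidia"|"amd"} object and that bot/globals.py exposes LLAMA_URL
(NVIDIA, port 8090) and LLAMA_AMD_URL (AMD, port 8091); anything not named in
this commit is illustrative:

    # bot/utils/llm.py (sketch)
    import json
    import os

    import globals

    STATE_FILE = os.path.join("memory", "gpu_state.json")  # assumed location/format

    def get_current_gpu_url() -> str:
        """Return the llama-swap base URL for the currently selected GPU."""
        try:
            with open(STATE_FILE) as f:
                selected = json.load(f).get("gpu", "nvidia")
        except (OSError, json.JSONDecodeError):
            selected = "nvidia"  # fall back to NVIDIA if state is missing/corrupt
        return globals.LLAMA_AMD_URL if selected == "amd" else globals.LLAMA_URL

    def get_vision_gpu_url() -> str:
        """Vision is pinned to the NVIDIA GPU regardless of the selector."""
        return globals.LLAMA_URL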

Technical details:
- Multi-stage Dockerfile.llamaswap-rocm with ROCm 6.2.4 (a sketch follows this list)
- llama.cpp compiled with GGML_HIP=ON for gfx1030 (RX 6800)
- Proper GPU permissions without root (groups 187/989)
- AMD container on port 8091, NVIDIA on port 8090
- Updated bot/utils/llm.py with get_current_gpu_url() and get_vision_gpu_url()
- Modified bot/utils/image_handling.py to always use NVIDIA for vision
- Enhanced web UI with GPU selector button (blue=NVIDIA, red=AMD)
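
A minimal sketch of the ROCm image, assuming the rocm/dev-ubuntu-22.04:6.2.4
base tag and llama.cpp's documented HIP build flags; the llama-swap install
step and exact paths are elided or illustrative:

    # Dockerfile.llamaswap-rocm (sketch)
    # --- build stage: compile llama.cpp with HIP for the RX 6800 ---
    FROM rocm/dev-ubuntu-22.04:6.2.4 AS build
    RUN apt-get update && apt-get install -y --no-install-recommends \
            git cmake build-essential && rm -rf /var/lib/apt/lists/*
    RUN git clone --depth 1 https://github.com/ggerganov/llama.cpp /src \
        && cmake -S /src -B /src/build -DGGML_HIP=ON -DAMDGPU_TARGETS=gfx1030 \
                 -DCMAKE_BUILD_TYPE=Release \
        && cmake --build /src/build --config Release -j"$(nproc)"

    # --- runtime stage: ROCm runtime, llama-swap, and the compiled server ---
    FROM rocm/dev-ubuntu-22.04:6.2.4
    COPY --from=build /src/build/bin/llama-server /usr/local/bin/
    # (llama-swap binary install elided; e.g. copied from a release artifact)
    COPY llama-swap-rocm-config.yaml /app/config.yaml
    EXPOSE 8091
    ENTRYPOINT ["llama-swap", "--config", "/app/config.yaml", "--listen", ":8091"]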

Files modified:
- docker-compose.yml (added llama-swap-amd service; sketched after this list)
- bot/globals.py (added LLAMA_AMD_URL)
- bot/api.py (added GPU selection endpoints and helper function; sketched after this list)
- bot/utils/llm.py (GPU routing for text models)
- bot/utils/image_handling.py (GPU routing for vision models)
- bot/static/index.html (GPU selector UI)
- llama-swap-rocm-config.yaml (unified model names)
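
For reference, a hedged sketch of the llama-swap-amd service; the device and
group settings follow common ROCm container practice, and the volume path is
illustrative:

    # docker-compose.yml (sketch of the added service)
    llama-swap-amd:
      build:
        context: .
        dockerfile: Dockerfile.llamaswap-rocm
      ports:
        - "8091:8091"        # AMD endpoint; NVIDIA stays on 8090
      devices:
        - /dev/kfd           # ROCm compute interface
        - /dev/dri           # GPU render nodes
      group_add:
        - "187"              # host video group (non-root GPU access)
        - "989"              # host render group
      volumes:
        - ./models:/models   # illustrative model path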
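
The endpoint shapes below are a sketch assuming a FastAPI-style app; the
actual framework in bot/api.py may differ, and only the route names and the
state file come from this commit:

    # bot/api.py (sketch; FastAPI is an assumption)
    import json
    import os

    from fastapi import FastAPI, HTTPException

    app = FastAPI()
    STATE_FILE = os.path.join("memory", "gpu_state.json")

    def _write_gpu_state(gpu: str) -> None:
        """Persist the selected GPU so it survives restarts."""
        with open(STATE_FILE, "w") as f:
            json.dump({"gpu": gpu}, f)

    @app.get("/gpu-status")
    async def gpu_status():
        """Report which GPU text models are currently routed to."""
        try:
            with open(STATE_FILE) as f:
                gpu = json.load(f).get("gpu", "nvidia")
        except (OSError, json.JSONDecodeError):
            gpu = "nvidia"
        return {"gpu": gpu}

    @app.post("/gpu-select")
    async def gpu_select(gpu: str):
        """Switch text-model routing; vision stays pinned to NVIDIA."""
        if gpu not in ("nvidia", "amd"):
            raise HTTPException(status_code=400, detail="gpu must be 'nvidia' or 'amd'")
        _write_gpu_state(gpu)
        return {"gpu": gpu}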

New files:
- Dockerfile.llamaswap-rocm
- bot/memory/gpu_state.json
- bot/utils/gpu_router.py (load balancing utility; sketched after this list)
- setup-dual-gpu.sh (setup verification script)
- DUAL_GPU_*.md (documentation files)
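
gpu_router.py is described here only as a load-balancing utility; one plausible
minimal shape, assuming simple round-robin over the two endpoints (the real
strategy may differ):

    # bot/utils/gpu_router.py (sketch; round-robin is an assumption)
    import itertools

    import globals

    # Alternate text-model requests between the NVIDIA (8090) and AMD (8091)
    # llama-swap endpoints.
    _backends = itertools.cycle([globals.LLAMA_URL, globals.LLAMA_AMD_URL])

    def next_backend() -> str:
        """Return the next backend URL in round-robin order."""
        return next(_backends)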

commit 1fc3d74a5b (parent ed5994ec78)
Date: 2026-01-09 00:03:59 +02:00
21 changed files with 2836 additions and 13 deletions

bot/utils/image_handling.py

@@ -233,7 +233,9 @@ async def analyze_image_with_vision(base64_img):
     """
     Analyze an image using llama.cpp multimodal capabilities.
     Uses OpenAI-compatible chat completions API with image_url.
+    Always uses NVIDIA GPU for vision model.
     """
+    from utils.llm import get_vision_gpu_url
     payload = {
         "model": globals.VISION_MODEL,
@@ -262,7 +264,8 @@ async def analyze_image_with_vision(base64_img):
     async with aiohttp.ClientSession() as session:
         try:
-            async with session.post(f"{globals.LLAMA_URL}/v1/chat/completions", json=payload, headers=headers) as response:
+            vision_url = get_vision_gpu_url()
+            async with session.post(f"{vision_url}/v1/chat/completions", json=payload, headers=headers) as response:
                 if response.status == 200:
                     data = await response.json()
                     return data.get("choices", [{}])[0].get("message", {}).get("content", "No description.")
@@ -323,7 +326,8 @@ async def analyze_video_with_vision(video_frames, media_type="video"):
     async with aiohttp.ClientSession() as session:
         try:
-            async with session.post(f"{globals.LLAMA_URL}/v1/chat/completions", json=payload, headers=headers) as response:
+            vision_url = get_vision_gpu_url()
+            async with session.post(f"{vision_url}/v1/chat/completions", json=payload, headers=headers) as response:
                 if response.status == 200:
                     data = await response.json()
                     return data.get("choices", [{}])[0].get("message", {}).get("content", "No description.")