Files
miku-discord/docker-compose.yml
koko210Serve 1fc3d74a5b Add dual GPU support with web UI selector
Features:
- Built custom ROCm container for AMD RX 6800 GPU
- Added GPU selection toggle in web UI (NVIDIA/AMD)
- Unified model names across both GPUs for seamless switching
- Vision model always uses NVIDIA GPU (optimal performance)
- Text models (llama3.1, darkidol) can use either GPU
- Added /gpu-status and /gpu-select API endpoints
- Implemented GPU state persistence in memory/gpu_state.json (endpoints and persistence sketched after this list)
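
A minimal sketch of the two endpoints, assuming bot/api.py is a FastAPI app (the framework isn't shown in this commit) and that memory/gpu_state.json holds a single selected-GPU key; the path constant and state schema are assumptions, not taken from the commit:

# Hypothetical sketch of the GPU selection endpoints in bot/api.py.
import json
from pathlib import Path
from fastapi import FastAPI, HTTPException

app = FastAPI()
GPU_STATE_PATH = Path("memory/gpu_state.json")  # assumed location, mounted via ./bot/memory

def _load_state() -> dict:
    # Default to NVIDIA if no selection has been persisted yet.
    if GPU_STATE_PATH.exists():
        return json.loads(GPU_STATE_PATH.read_text())
    return {"gpu": "nvidia"}

@app.get("/gpu-status")
def gpu_status():
    return _load_state()

@app.post("/gpu-select")
def gpu_select(gpu: str):
    if gpu not in ("nvidia", "amd"):
        raise HTTPException(status_code=400, detail="gpu must be 'nvidia' or 'amd'")
    GPU_STATE_PATH.write_text(json.dumps({"gpu": gpu}))  # persist across restarts
    return {"gpu": gpu}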

Technical details:
- Multi-stage Dockerfile.llamaswap-rocm with ROCm 6.2.4
- llama.cpp compiled with GGML_HIP=ON for gfx1030 (RX 6800)
- Proper GPU permissions without root (group_add with video group 985 and render group 989, per docker-compose.yml)
- AMD container on port 8091, NVIDIA on port 8090
- Updated bot/utils/llm.py with get_current_gpu_url() and get_vision_gpu_url() (sketched after this list)
- Modified bot/utils/image_handling.py to always use NVIDIA for vision
- Enhanced web UI with GPU selector button (blue=NVIDIA, red=AMD)
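
A rough sketch of those two helpers; the LLAMA_URL/LLAMA_AMD_URL defaults match the docker-compose.yml below, while the state-file schema is assumed:

# Hypothetical sketch of the routing helpers in bot/utils/llm.py.
import json
import os
from pathlib import Path

LLAMA_URL = os.environ.get("LLAMA_URL", "http://llama-swap:8080")              # NVIDIA endpoint
LLAMA_AMD_URL = os.environ.get("LLAMA_AMD_URL", "http://llama-swap-amd:8080")  # AMD endpoint
GPU_STATE_PATH = Path("memory/gpu_state.json")

def get_current_gpu_url() -> str:
    """Return the endpoint for text models, honoring the web-UI selection."""
    try:
        state = json.loads(GPU_STATE_PATH.read_text())
    except (FileNotFoundError, json.JSONDecodeError):
        state = {"gpu": "nvidia"}
    return LLAMA_AMD_URL if state.get("gpu") == "amd" else LLAMA_URL

def get_vision_gpu_url() -> str:
    """Vision always runs on the NVIDIA GPU, regardless of the selector."""
    return LLAMA_URL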

Files modified:
- docker-compose.yml (added llama-swap-amd service)
- bot/globals.py (added LLAMA_AMD_URL)
- bot/api.py (added GPU selection endpoints and helper function)
- bot/utils/llm.py (GPU routing for text models)
- bot/utils/image_handling.py (GPU routing for vision models)
- bot/static/index.html (GPU selector UI)
- llama-swap-rocm-config.yaml (unified model names)

New files:
- Dockerfile.llamaswap-rocm
- bot/memory/gpu_state.json
- bot/utils/gpu_router.py (load balancing utility; sketched below)
- setup-dual-gpu.sh (setup verification script)
- DUAL_GPU_*.md (documentation files)
2026-01-09 00:03:59 +02:00
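
The commit only describes bot/utils/gpu_router.py as a "load balancing utility", so its design is not shown; one plausible shape, a health-checked round-robin over the two endpoints, could look like this (entirely speculative):

# Speculative sketch of bot/utils/gpu_router.py.
import itertools
import requests

BACKENDS = ["http://llama-swap:8080", "http://llama-swap-amd:8080"]
_cycle = itertools.cycle(BACKENDS)

def pick_backend(timeout: float = 2.0) -> str:
    """Return the next backend whose /health check passes; fall back to NVIDIA."""
    for _ in range(len(BACKENDS)):
        url = next(_cycle)
        try:
            if requests.get(f"{url}/health", timeout=timeout).ok:
                return url
        except requests.RequestException:
            continue
    return BACKENDS[0]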

version: '3.9'
services:
  llama-swap:
    image: ghcr.io/mostlygeek/llama-swap:cuda
    container_name: llama-swap
    ports:
      - "8090:8080"  # Map host port 8090 to container port 8080
    volumes:
      - ./models:/models  # GGUF model files
      - ./llama-swap-config.yaml:/app/config.yaml  # llama-swap configuration
    runtime: nvidia
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 10s
      timeout: 5s
      retries: 10
      start_period: 30s  # Give more time for initial model loading
    environment:
      - NVIDIA_VISIBLE_DEVICES=all

  llama-swap-amd:
    build:
      context: .
      dockerfile: Dockerfile.llamaswap-rocm
    container_name: llama-swap-amd
    ports:
      - "8091:8080"  # Map host port 8091 to container port 8080
    volumes:
      - ./models:/models  # GGUF model files
      - ./llama-swap-rocm-config.yaml:/app/config.yaml  # llama-swap configuration for AMD
    devices:
      - /dev/kfd:/dev/kfd
      - /dev/dri:/dev/dri
    group_add:
      - "985"  # video group
      - "989"  # render group
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 10s
      timeout: 5s
      retries: 10
      start_period: 30s  # Give more time for initial model loading
    environment:
      - HSA_OVERRIDE_GFX_VERSION=10.3.0  # RX 6800 compatibility
      - ROCM_PATH=/opt/rocm
      - HIP_VISIBLE_DEVICES=0  # Use first AMD GPU
      - GPU_DEVICE_ORDINAL=0

  miku-bot:
    build: ./bot
    container_name: miku-bot
    volumes:
      - ./bot/memory:/app/memory
      - /home/koko210Serve/ComfyUI/output:/app/ComfyUI/output:ro
      - /var/run/docker.sock:/var/run/docker.sock  # Allow container management
    depends_on:
      llama-swap:
        condition: service_healthy
      llama-swap-amd:
        condition: service_healthy
    environment:
      - DISCORD_BOT_TOKEN=${DISCORD_BOT_TOKEN}  # Read from the host environment; never commit the raw token
      - LLAMA_URL=http://llama-swap:8080
      - LLAMA_AMD_URL=http://llama-swap-amd:8080  # Secondary AMD GPU endpoint
      - TEXT_MODEL=llama3.1
      - VISION_MODEL=vision
      - OWNER_USER_ID=209381657369772032  # Your Discord user ID for DM analysis reports
    ports:
      - "3939:3939"
    restart: unless-stopped

  anime-face-detector:
    build: ./face-detector
    container_name: anime-face-detector
    runtime: nvidia
    deploy:
      resources:
        reservations:
          devices:
            - capabilities: [gpu]
    volumes:
      - ./face-detector/api:/app/api
      - ./face-detector/images:/app/images
    ports:
      - "7860:7860"  # Gradio UI
      - "6078:6078"  # FastAPI API
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=compute,utility
    restart: "no"  # Don't auto-restart - only run on-demand
    profiles:
      - tools  # Don't start by default
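
With the host port mappings above (8090 for the CUDA container, 8091 for the ROCm one), a quick host-side smoke test of both /health endpoints, presumably similar in spirit to what setup-dual-gpu.sh verifies, might look like:

# Host-side smoke test for the two llama-swap services.
import requests

for name, port in (("llama-swap (NVIDIA)", 8090), ("llama-swap-amd (ROCm)", 8091)):
    try:
        r = requests.get(f"http://localhost:{port}/health", timeout=5)
        print(f"{name}: {'healthy' if r.ok else f'HTTP {r.status_code}'}")
    except requests.RequestException as exc:
        print(f"{name}: unreachable ({exc})")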