- Created new logging infrastructure with per-component filtering
- Added 6 log levels: DEBUG, INFO, API, WARNING, ERROR, CRITICAL
- Implemented non-hierarchical level control (any combination can be enabled)
- Migrated 917 print() statements across 31 files to structured logging
- Created web UI (system.html) for runtime configuration with dark theme
- Added global level controls to enable/disable levels across all components
- Added timestamp format control (off/time/date/datetime options)
- Implemented log rotation (10MB per file, 5 backups)
- Added API endpoints for dynamic log configuration
- Configured HTTP request logging with filtering via the api.requests component
- Intercepted APScheduler logs with proper formatting
- Fixed persistence paths to use /app/memory for Docker volume compatibility
- Fixed checkbox display bug in web UI (enabled_levels now properly shown)
- Changed System Settings button to open in same tab instead of new window
Components: bot, api, api.requests, autonomous, persona, vision, llm,
conversation, mood, dm, scheduled, gpu, media, server, commands,
sentiment, core, apscheduler
All settings persist across container restarts via JSON config.
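A minimal sketch of the non-hierarchical, per-component level filtering described above; the class and attribute names here are illustrative, not the actual API of the new logging module:

```python
# Minimal sketch of non-hierarchical, per-component level filtering.
# ComponentLogger and enabled_levels are illustrative names, not the real API.
LEVELS = {"DEBUG", "INFO", "API", "WARNING", "ERROR", "CRITICAL"}

class ComponentLogger:
    def __init__(self, component: str, enabled_levels: set):
        self.component = component
        # Any combination of levels can be enabled; there is no severity ordering.
        self.enabled_levels = set(enabled_levels) & LEVELS

    def log(self, level: str, message: str) -> None:
        if level in self.enabled_levels:
            print(f"[{level}] [{self.component}] {message}")

# Example: enable only API and ERROR for the api.requests component.
requests_log = ComponentLogger("api.requests", {"API", "ERROR"})
requests_log.log("API", "GET /gpu-status 200")
requests_log.log("DEBUG", "this line is filtered out")
```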
Features:
- Built custom ROCm container for AMD RX 6800 GPU
- Added GPU selection toggle in web UI (NVIDIA/AMD)
- Unified model names across both GPUs for seamless switching
- Vision model always uses NVIDIA GPU (for optimal performance)
- Text models (llama3.1, darkidol) can use either GPU
- Added /gpu-status and /gpu-select API endpoints
- Implemented GPU state persistence in memory/gpu_state.json
Technical details:
- Multi-stage Dockerfile.llamaswap-rocm with ROCm 6.2.4
- llama.cpp compiled with GGML_HIP=ON for gfx1030 (RX 6800)
- Proper GPU permissions without root (groups 187/989)
- AMD container on port 8091, NVIDIA on port 8090
- Updated bot/utils/llm.py with get_current_gpu_url() and get_vision_gpu_url()
- Modified bot/utils/image_handling.py to always use NVIDIA for vision
- Enhanced web UI with GPU selector button (blue=NVIDIA, red=AMD)
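A sketch of the text/vision GPU routing; the two function names come from bot/utils/llm.py above, while the container hostnames and the state-file layout are assumptions:

```python
# Sketch of the GPU routing described above; hostnames and the gpu_state.json
# keys are assumptions, only the function names and ports come from this changelog.
import json

LLAMA_NVIDIA_URL = "http://llama-swap:8090"       # NVIDIA container (assumed hostname)
LLAMA_AMD_URL = "http://llama-swap-amd:8091"      # AMD ROCm container (assumed hostname)
GPU_STATE_FILE = "/app/memory/gpu_state.json"

def get_current_gpu_url() -> str:
    """Text models (llama3.1, darkidol) follow the user-selected GPU."""
    try:
        with open(GPU_STATE_FILE) as f:
            selected = json.load(f).get("selected_gpu", "nvidia")
    except FileNotFoundError:
        selected = "nvidia"
    return LLAMA_AMD_URL if selected == "amd" else LLAMA_NVIDIA_URL

def get_vision_gpu_url() -> str:
    """The vision model is pinned to the NVIDIA GPU regardless of selection."""
    return LLAMA_NVIDIA_URL
```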
Files modified:
- docker-compose.yml (added llama-swap-amd service)
- bot/globals.py (added LLAMA_AMD_URL)
- bot/api.py (added GPU selection endpoints and helper function)
- bot/utils/llm.py (GPU routing for text models)
- bot/utils/image_handling.py (GPU routing for vision models)
- bot/static/index.html (GPU selector UI)
- llama-swap-rocm-config.yaml (unified model names)
New files:
- Dockerfile.llamaswap-rocm
- bot/memory/gpu_state.json
- bot/utils/gpu_router.py (load balancing utility)
- setup-dual-gpu.sh (setup verification script)
- DUAL_GPU_*.md (documentation files)
When an argument ends and a winner is determined, the bot now explicitly
passes all mode change parameters (change_username, change_pfp, change_nicknames,
change_role_color) to ensure the winner's role color is properly restored.
- Evil Miku wins: Saves current color, switches to dark red (#D60004)
- Regular Miku wins: Restores previously saved color (from before Evil Mode)
This ensures the visual identity matches the active persona after arguments.
Major Features:
- Complete Bipolar Mode system allowing Regular Miku and Evil Miku to coexist and argue via webhooks
- LLM arbiter system using neutral model to judge argument winners with detailed reasoning
- Persistent scoreboard tracking wins, percentages, and last 50 results with timestamps and reasoning
- Automatic mode switching based on argument winner
- Webhook management per channel with profile pictures and display names
- Progressive probability system for dynamic argument lengths (starts at 10%, increases 5% per exchange, min 4 exchanges)
- Draw handling with a penalty system (-5% to the end chance; the argument continues)
- Integration with autonomous system for random argument triggers
Argument System:
- MIN_EXCHANGES = 4, progressive end chance starting at 10%
- Enhanced prompts for both personas (strategic, short, punchy responses of 1-3 sentences)
- Evil Miku triumphant victory messages with gloating and satisfaction
- Regular Miku assertive defense (not passive, shows backbone)
- Message-based argument starting (can respond to specific messages via ID)
- Conversation history tracking per argument with special user_id
- Full context queries (personality, lore, lyrics, last 8 messages)
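A sketch of the progressive end-chance math; the constants are taken from this changelog, while the surrounding decision flow is an assumption:

```python
import random

# Sketch of the progressive end-chance calculation; constants from the changelog,
# the exact decision flow in bipolar_mode.py may differ.
MIN_EXCHANGES = 4
BASE_END_CHANCE = 0.10   # 10% once the minimum exchange count is reached
END_CHANCE_STEP = 0.05   # +5% per additional exchange
DRAW_PENALTY = 0.05      # a draw lowers the end chance and the argument continues

def should_end_argument(exchanges: int, draw_count: int = 0) -> bool:
    if exchanges < MIN_EXCHANGES:
        return False
    chance = BASE_END_CHANCE + END_CHANCE_STEP * (exchanges - MIN_EXCHANGES)
    chance -= DRAW_PENALTY * draw_count
    return random.random() < max(chance, 0.0)
```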
LLM Arbiter:
- Decisive prompt emphasizing picking winners (draws should be rare)
- Improved parsing with first-line exact matching and fallback counting
- Debug logging for decision transparency
- Arbiter reasoning stored in scoreboard history for review
- Uses neutral TEXT_MODEL (not evil) for unbiased judgment
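A sketch of the verdict parsing (first-line exact match, then fallback counting); the verdict labels are assumptions:

```python
# Sketch of the arbiter-response parsing described above; verdict labels are
# assumptions, the two-stage strategy comes from this changelog.
def parse_arbiter_verdict(response: str) -> str:
    stripped = response.strip()
    first_line = stripped.splitlines()[0].strip().lower() if stripped else ""
    if first_line in ("evil miku", "regular miku", "draw"):
        return first_line
    # Fallback: count persona mentions across the whole reasoning text.
    text = response.lower()
    evil, regular = text.count("evil miku"), text.count("regular miku")
    if evil > regular:
        return "evil miku"
    if regular > evil:
        return "regular miku"
    return "draw"
```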
Web UI & API:
- Bipolar mode toggle button (only visible when evil mode is on)
- Channel ID + Message ID input fields for argument triggering
- Scoreboard display with win percentages and recent history
- Manual argument trigger endpoint with string-based IDs
- GET /bipolar-mode/scoreboard endpoint for stats retrieval
- Real-time active arguments tracking (refreshes every 5 seconds)
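Example of querying the scoreboard endpoint; the base URL is an assumption, only the route comes from this changelog:

```python
import requests

# Hypothetical host/port; only the /bipolar-mode/scoreboard route is documented above.
resp = requests.get("http://localhost:8080/bipolar-mode/scoreboard", timeout=10)
resp.raise_for_status()
print(resp.json())  # wins, percentages, and recent history
```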
Prompt Optimizations:
- All argument prompts limited to 1-3 sentences for impact
- Evil Miku system prompt with variable response length guidelines
- Removed walls of text, emphasizing brevity and precision
- "Sometimes the cruelest response is the shortest one"
Evil Miku Updates:
- Added height to lore (15.8m tall, 10x bigger than regular Miku)
- Height added to prompt facts for size-based belittling
- More strategic and calculating personality in arguments
Integration:
- Bipolar mode state restoration on bot startup
- Bot skips processing messages during active arguments
- Autonomous system checks for bipolar triggers after actions
- Import fixes (apply_evil_mode_changes/revert_evil_mode_changes)
Technical Details:
- State persistence via JSON (bipolar_mode_state.json, bipolar_webhooks.json, bipolar_scoreboard.json)
- Webhook caching per guild with fallback creation
- Event loop management with asyncio.create_task
- Rate limiting and argument conflict prevention
- Globals integration (BIPOLAR_MODE, BIPOLAR_WEBHOOKS, BIPOLAR_ARGUMENT_IN_PROGRESS, MOOD_EMOJIS)
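A sketch of the JSON state-persistence pattern shared by these files; the file name comes from this changelog, the path follows the Docker-volume convention noted earlier, and the helper names are assumptions:

```python
import json
from pathlib import Path

# Sketch of the JSON persistence used for the bipolar-mode state files;
# save_state/load_state are hypothetical helper names.
STATE_FILE = Path("/app/memory/bipolar_mode_state.json")

def save_state(state: dict) -> None:
    STATE_FILE.write_text(json.dumps(state, indent=2))

def load_state() -> dict:
    return json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
```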
Files Changed:
- bot/bot.py: Added bipolar mode restoration and argument-in-progress checks
- bot/globals.py: Added bipolar mode state variables and mood emoji mappings
- bot/utils/bipolar_mode.py: Complete 1106-line implementation
- bot/utils/autonomous.py: Added bipolar argument trigger checks
- bot/utils/evil_mode.py: Updated system prompt, added height info to lore/prompt
- bot/api.py: Added bipolar mode endpoints (trigger, toggle, scoreboard)
- bot/static/index.html: Added bipolar controls section with scoreboard
- bot/memory/: Various DM conversation updates
- bot/evil_miku_lore.txt: Added height description
- bot/evil_miku_prompt.txt: Added height to facts, updated personality guidelines
- Evil mode now saves current 'Miku Color' role color before changing
- Sets role color to #D60004 (dark red) when evil mode is enabled
- Restores saved color when evil mode is disabled
- Color is persisted in evil_mode_state.json between restarts
- Role color changes are skipped on startup restore to avoid rate limits
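A sketch of the role-color save/restore, assuming discord.py; the helper names and state layout are assumptions, only the #D60004 value and the skip-on-startup rule come from this changelog:

```python
import discord

EVIL_COLOR = discord.Color(0xD60004)  # dark red

async def apply_evil_color(role: discord.Role, state: dict, startup_restore: bool = False) -> None:
    if startup_restore:
        return  # skip the role edit on startup restore to avoid rate limits
    state["saved_color"] = role.color.value  # remember the current 'Miku Color'
    await role.edit(color=EVIL_COLOR)

async def revert_evil_color(role: discord.Role, state: dict) -> None:
    saved = state.get("saved_color")
    if saved is not None:
        await role.edit(color=discord.Color(saved))
```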
- Figurine DM notifications now respect evil mode state
- Evil Miku sends cruel, mocking comments about merch instead of excited ones
- Normal Miku remains enthusiastic and friendly about figurines
- Both modes use appropriate sign-off emojis (cute vs dark)
- Added Evil Miku mode with 4 evil moods (aggressive, cunning, sarcastic, evil_neutral)
- Created evil mode content files (evil_miku_lore.txt, evil_miku_prompt.txt, evil_miku_lyrics.txt)
- Implemented persistent evil mode state across restarts (saves to memory/evil_mode_state.json)
- Fixed API endpoints to use client.loop.create_task() to prevent timeout errors
- Added evil mode toggle in web UI with red theme styling
- Modified mood rotation to handle evil mode
- Configured DarkIdol uncensored model for evil mode text generation
- Reduced system prompt redundancy by removing duplicate content
- Added markdown escape for single asterisks (actions) while preserving bold formatting (see the regex sketch after this list)
- Evil mode now persists username, pfp, and nicknames across restarts without re-applying changes
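A sketch of the single-asterisk escape referenced above; the exact regex in the bot may differ:

```python
import re

# Escape a lone '*' (actions like *waves*) while leaving **bold** pairs untouched.
def escape_single_asterisks(text: str) -> str:
    return re.sub(r"(?<!\*)\*(?!\*)", r"\\*", text)

print(escape_single_asterisks("*waves* this is **important**"))
# -> \*waves\* this is **important**
```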
- Replace simple 'Engage Random User' button with expandable submenu
- Add user ID input field for targeting specific users
- Add engagement type selection: random, activity-based, general, status-based
- Update API endpoints to accept user_id and engagement_type parameters
- Modify autonomous functions to support targeted engagement
- Maintain backward compatibility with random user/type selection as default
- Added /autonomous/join-conversation API endpoint in api.py
- Added 'Detect and Join Conversation' button to Web UI under Autonomous Actions
- Added 'autonomous join-conversation' command to CLI tool (miku-cli.py)
- Updated miku_detect_and_join_conversation_for_server to support a force=True parameter
- When force=True: skips the time limit, activity checks, and random chance
- Force mode uses the last 10 user messages regardless of age
- Manual triggers via Web UI/CLI now work even with old messages
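A sketch of how the force flag bypasses the normal gating; the constants and time threshold are assumptions, only the force semantics come from this changelog:

```python
import random

JOIN_CHANCE = 0.3       # assumed random-chance gate
TIME_LIMIT_MIN = 30     # assumed minimum minutes between autonomous joins

def should_attempt_join(minutes_since_last_join: float, channel_active: bool,
                        force: bool = False) -> bool:
    if force:
        return True  # manual Web UI/CLI triggers skip every check
    if minutes_since_last_join < TIME_LIMIT_MIN or not channel_active:
        return False
    return random.random() < JOIN_CHANCE
```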
- Moved on_message_event() call to END of message processing in bot.py
- Only track messages for autonomous when NOT addressed to Miku
- Fixed autonomous_engine.py to convert all message-triggered actions to join_conversation
- Prevent inappropriate autonomous actions (general, share_tweet, change_profile_picture) when triggered by user messages
- Ensures Miku responds to user messages FIRST before any autonomous action fires
This fixes the issue where autonomous actions would fire before Miku's response to user messages, and ensures the 'detect and join conversation' safeguard works properly.
Twitter changed their JavaScript response format to include unquoted keys in JSON objects, which breaks twscrape's parser. This fix applies a monkey patch that uses regex to quote the unquoted keys before parsing.
This resolves the issue preventing figurine notifications from being sent for the past several days.
Reference: https://github.com/vladkens/twscrape/issues/284
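A minimal sketch of the key-quoting idea behind the patch; the real fix hooks into twscrape's parser, this only demonstrates the regex on a standalone string:

```python
import json
import re

# Quote bare JS object keys (e.g. {bookmark_count: 3}) so json.loads accepts them.
UNQUOTED_KEY = re.compile(r'([{,]\s*)([A-Za-z_][A-Za-z0-9_]*)\s*:')

def quote_keys(js_object_text: str) -> str:
    return UNQUOTED_KEY.sub(r'\1"\2":', js_object_text)

print(json.loads(quote_keys('{bookmark_count: 3, favorited: false}')))
# -> {'bookmark_count': 3, 'favorited': False}
```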
- Consolidated all .bak.* files from bot/ directory into backups/2025-12-07/
- Moved unused autonomous_wip.py to backups (verified not imported anywhere)
- Relocated old .bot.bak.80825/ backup directory into backups/2025-12-07/old-bot-bak-80825/
- Preserved autonomous_v1_legacy.py as it is still actively used by autonomous.py
- Created new backups/ directory with date-stamped subdirectory for better organization
- Detect animated GIFs and preserve animation frames during upload
- Extract dominant color from first frame for role color syncing
- Generate multi-frame descriptions using existing video analysis pipeline
- Skip face detection/cropping for GIFs to maintain original animation
- Update UI to inform users about GIF support and Nitro requirement
- Add metadata flag to distinguish animated vs static profile pictures
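A sketch of the GIF inspection using Pillow; the dominant-color heuristic (most common pixel of a downscaled first frame) is an assumption:

```python
from PIL import Image

def inspect_profile_image(path: str) -> dict:
    with Image.open(path) as img:
        animated = getattr(img, "is_animated", False)  # metadata flag: animated vs static
        img.seek(0)  # the first frame drives the role-color sync
        frame = img.convert("RGB").resize((64, 64))
        colors = frame.getcolors(64 * 64)              # [(count, (r, g, b)), ...]
        dominant = max(colors, key=lambda c: c[0])[1]
    return {"animated": animated, "dominant_color": dominant}
```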