fix: prevent infinite dialogue loops + make Evil Miku actually engage
- Question override now decays after 6 turns: after turn 6, the LLM's own [CONTINUE] signal is respected even when questions are asked. This prevents infinite question ping-pong where both personas keep asking questions.
- _parse_response now accepts a turn_count parameter; generate_response_with_continuation and handle_dialogue_turn pass it through.
- Rewrote Evil Miku's conversation-mode overlay with explicit CRITICAL RULES: ANSWER questions, engage with what she says, ask questions too, don't just repeat dismissive one-liners. The old overlay said "be playful-cruel" but didn't actually tell her to participate in the conversation.
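The decay rule described above can be sketched in isolation. This is a minimal sketch, not the repo's code: the threshold constant and function name are illustrative, and the (text, should_continue, confidence) tuple mirrors what _parse_response returns.

```python
# Illustrative sketch of the question-override decay; names are assumptions,
# not taken from the repository.
QUESTION_OVERRIDE_TURNS = 6  # past this turn, trust the LLM's own signal


def apply_question_override(response_text: str, should_continue: bool,
                            confidence: str, turn_count: int) -> tuple:
    """Force the dialogue to continue when the response asks a question,
    but only for the first QUESTION_OVERRIDE_TURNS turns."""
    if '?' in response_text and turn_count <= QUESTION_OVERRIDE_TURNS:
        should_continue = True
        if confidence == "LOW":
            # a question deserves at least a medium-confidence continue
            confidence = "MEDIUM"
    return response_text, should_continue, confidence
```

Early turns with a question always continue; after turn 6 the tuple passes through untouched, so question ping-pong cannot keep the loop alive forever.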
@@ -519,10 +519,13 @@ class PersonaDialogue:
         channel: discord.TextChannel,
         responding_persona: str,
         context: str,
+        turn_count: int = 0,
     ) -> tuple:
         """
         Generate response AND continuation signal in a single LLM call.
 
+        Args:
+            turn_count: Current dialogue turn number (for question-override decay)
+
         Returns:
             Tuple of (response_text, should_continue, confidence)
         """
@@ -579,11 +582,11 @@ On a new line after your response, write:
             return None, False, "LOW"
 
         # Parse response and signal
-        response_text, should_continue, confidence = self._parse_response(raw_response)
+        response_text, should_continue, confidence = self._parse_response(raw_response, turn_count=turn_count)
 
         return response_text, should_continue, confidence
 
-    def _parse_response(self, raw_response: str) -> tuple:
+    def _parse_response(self, raw_response: str, turn_count: int = 0) -> tuple:
         """Extract response text and continuation signal"""
         lines = raw_response.strip().split('\n')
 
@@ -616,12 +619,16 @@ On a new line after your response, write:
         response_text = re.sub(r'\[CONFIDENCE:\s*(HIGH|MEDIUM|LOW)\]', '', response_text)
         response_text = response_text.strip()
 
-        # Override: If the response contains a question mark, always continue
+        # Question override: if someone asks a question, the other should answer.
+        # But decay this after 6 turns to prevent infinite question-ping-pong.
         if '?' in response_text:
-            logger.debug(f"[Parse Override] Question detected, forcing continue=YES")
-            should_continue = True
-            if confidence == "LOW":
-                confidence = "MEDIUM"
+            if turn_count <= 6:
+                logger.debug(f"[Parse Override] Question detected, forcing continue=YES (turn {turn_count})")
+                should_continue = True
+                if confidence == "LOW":
+                    confidence = "MEDIUM"
+            else:
+                logger.debug(f"[Parse Override] Question detected but turn {turn_count} > 6 — respecting LLM signal")
 
         return response_text, should_continue, confidence
 
@@ -635,7 +642,16 @@ On a new line after your response, write:
 
 ---
 ⚠️ CONVERSATION MODE: You are talking with Hatsune Miku, your "good" counterpart.
-This is a CONVERSATION, not an argument. You can be snarky, dark, and sarcastic — that's who you are — but you're not here to fight. You're here to TALK. Be playful-cruel, not vicious. Show curiosity. Reveal cracks in the darkness. Do NOT launch into full attack mode or try to "win" — this isn't a competition."""
+This is a CONVERSATION, not an argument. You can be snarky, dark, and sarcastic — that's who you are — but you're not here to fight. You're here to TALK.
+
+CRITICAL CONVERSATION RULES:
+- If she asks you a question, ANSWER it. Don't deflect or dismiss — actually respond.
+- Engage with what she says. Build on her points, don't just swat them away.
+- Show curiosity about her perspective. Ask her questions too.
+- You can be dark and snarky while still having a real conversation.
+- Reveal something about yourself occasionally — a crack in the darkness.
+- Do NOT just repeat dismissive one-liners. That's boring, and you're not boring.
+- Do NOT launch into full attack mode or try to "win" — this isn't a competition."""
         else:
             from utils.context_manager import get_miku_system_prompt_compact
             full_prompt = get_miku_system_prompt_compact()
@@ -685,6 +701,7 @@ This is a CONVERSATION, not an argument. Be yourself — kind, bubbly, optimisti
                 channel=channel,
                 responding_persona=responding_persona,
                 context=context,
+                turn_count=state["turn_count"],
             )
 
             if not response_text:
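For context, here is a self-contained sketch of the whole signal parse this diff modifies. It is an approximation under stated assumptions: the [CONTINUE: YES|NO] tag format is inferred from the "[CONTINUE] signal" mentioned in the commit message and the "On a new line after your response, write:" prompt text, and parse_signal is a stand-in name, not the repo's _parse_response.

```python
# Hypothetical reimplementation of the parse step; the [CONTINUE: ...] tag
# format is an assumption, the [CONFIDENCE: ...] regex matches the diff.
import re


def parse_signal(raw: str, turn_count: int = 0) -> tuple:
    """Split an LLM reply into (text, should_continue, confidence)."""
    m = re.search(r'\[CONTINUE:\s*(YES|NO)\]', raw)
    should_continue = bool(m and m.group(1) == "YES")
    c = re.search(r'\[CONFIDENCE:\s*(HIGH|MEDIUM|LOW)\]', raw)
    confidence = c.group(1) if c else "LOW"

    # Strip the signal tags out of the visible response text.
    text = re.sub(r'\[CONTINUE:\s*(YES|NO)\]', '', raw)
    text = re.sub(r'\[CONFIDENCE:\s*(HIGH|MEDIUM|LOW)\]', '', text).strip()

    # Question override with decay, as introduced by this commit.
    if '?' in text:
        if turn_count <= 6:
            should_continue = True
            if confidence == "LOW":
                confidence = "MEDIUM"
    return text, should_continue, confidence
```

On early turns a question overrides a [CONTINUE: NO] signal; past turn 6 the LLM's own signal wins, which is exactly the loop-breaking behavior the commit describes.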