
fix: next suggest question logic problem (#6451)

Co-authored-by: evenyan <yikun.yan@ubtrobot.com>
Even committed 9 months ago
Parent commit c013086e64
2 changed files with 2 additions and 1 deletion
  1. api/core/llm_generator/prompts.py (+1 −0)
  2. api/core/memory/token_buffer_memory.py (+1 −1)

+ 1 - 0
api/core/llm_generator/prompts.py

@@ -64,6 +64,7 @@ User Input:
 SUGGESTED_QUESTIONS_AFTER_ANSWER_INSTRUCTION_PROMPT = (
     "Please help me predict the three most likely questions that human would ask, "
     "and keeping each question under 20 characters.\n"
+    "MAKE SURE your output is the SAME language as the Assistant's latest response(if the main response is written in Chinese, then the language of your output must be using Chinese.)!\n"
     "The output must be an array in JSON format following the specified schema:\n"
     "[\"question1\",\"question2\",\"question3\"]\n"
 )
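The added prompt line instructs the model to emit the suggested questions as a JSON array in the same language as the assistant's latest response. A minimal sketch of validating that output shape on the caller side (the function name `parse_suggested_questions` is an assumption for illustration, not Dify's actual API):

```python
import json


def parse_suggested_questions(raw: str) -> list[str]:
    # The prompt specifies an array like ["question1","question2","question3"],
    # so anything that is not a JSON list of strings is rejected.
    questions = json.loads(raw)
    if not isinstance(questions, list) or not all(isinstance(q, str) for q in questions):
        raise ValueError("expected a JSON array of strings")
    return questions
```

Validating the shape up front keeps a malformed model reply from propagating further into the suggestion pipeline.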

+ 1 - 1
api/core/memory/token_buffer_memory.py

@@ -103,7 +103,7 @@ class TokenBufferMemory:
 
         if curr_message_tokens > max_token_limit:
             pruned_memory = []
-            while curr_message_tokens > max_token_limit and prompt_messages:
+            while curr_message_tokens > max_token_limit and len(prompt_messages)>1:
                 pruned_memory.append(prompt_messages.pop(0))
                 curr_message_tokens = self.model_instance.get_llm_num_tokens(
                     prompt_messages
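The change tightens the pruning loop's stop condition: the old `and prompt_messages` guard allowed the loop to pop every message when the limit was very small, leaving nothing for the suggested-question generator to work from, while `len(prompt_messages) > 1` always keeps the most recent message. A minimal sketch of the behavior under an assumed flat token counter (`count_tokens` and the plain string messages are illustrative stand-ins, not Dify's real `model_instance.get_llm_num_tokens`):

```python
def count_tokens(messages):
    # Hypothetical counter: pretend every message costs 10 tokens.
    return 10 * len(messages)


def prune(prompt_messages, max_token_limit):
    pruned_memory = []
    curr_message_tokens = count_tokens(prompt_messages)
    # The fixed condition: stop while at least one message remains,
    # so the latest message is never pruned away entirely.
    while curr_message_tokens > max_token_limit and len(prompt_messages) > 1:
        pruned_memory.append(prompt_messages.pop(0))
        curr_message_tokens = count_tokens(prompt_messages)
    return prompt_messages, pruned_memory


kept, pruned = prune(["m1", "m2", "m3", "m4"], 15)
# kept == ["m4"]; pruned == ["m1", "m2", "m3"]
```

Even though the final message may still exceed the limit on its own, the generator is guaranteed non-empty context, which is what the suggested-question logic in #6451 required.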