Chat completion endpoint cannot remember its previous messages #1097

Closed
1 task done
HuskyDanny opened this issue Jan 23, 2024 · 2 comments
Labels
bug Something isn't working

Comments

@HuskyDanny

Confirm this is an issue with the Python library and not an underlying OpenAI API

  • This is an issue with the Python library

Describe the bug

With this request, the LLM should answer my last question, but it says it doesn't know. Is ChatPromptTemplate with the chat completion endpoint supposed to remember the previous messages, or should I manually add them to the latest user prompt?

[screenshot: chat completion takes a list of messages]

[screenshot: response saying it doesn't know the previous message]
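For context on the question itself: the Chat Completions endpoint is stateless, so it only "remembers" whatever turns are re-sent in the `messages` list on every request. A minimal sketch of assembling that list (the `build_messages` helper and the example turns are illustrative, not from the reporter's code; the actual API call is left commented out so the snippet runs without credentials):

```python
def build_messages(system_prompt: str, history: list, new_question: str) -> list:
    """Assemble the full message list the endpoint needs on every call.

    `history` holds the earlier turns as {"role": ..., "content": ...}
    dicts; the endpoint itself keeps no state between requests, so the
    caller must resend them each time.
    """
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(history)
    messages.append({"role": "user", "content": new_question})
    return messages


history = [
    {"role": "user", "content": "My name is Alice."},
    {"role": "assistant", "content": "Nice to meet you, Alice!"},
]
messages = build_messages("You are a helpful assistant.", history, "What is my name?")

# With the list assembled, the request itself (openai-python v1.x) would be:
# from openai import OpenAI
# client = OpenAI()  # reads OPENAI_API_KEY from the environment
# response = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
# print(response.choices[0].message.content)
```

If the earlier turns are omitted from `messages`, the model has no way to answer a question that depends on them, which is consistent with the behavior in the screenshots.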

To Reproduce

This is my setup code using Semantic Kernel, which ultimately uses the openai package to send the request; essentially it adds the previous messages to the ChatPromptTemplate.

system_message = """
You are a technical supporter that help users' questions, you should give 1-2 sentences to explain the answer and code examples for the question based on only the following contexts.
You may find the contexts provide information for multiple potential answers, you can give up to 3 most relevant answers separated by numbers like 1. 2. 3.

Follow this pattern to answer the question:
Contexts:
- Document path_title1 : page_content1
&&&
- Document path_title2 : page_content2

Question: question

Answer: answer
"""

prompt_template = """
Contexts:
{{$context}}

Question: {{$input}}
"""

def preprocess_messages(messages: list):
    # Normalize roles ("bot" -> "assistant") and skip any system messages,
    # since the system prompt is prepended separately.
    transformed_messages = [{"role": "system", "message": system_message}]
    for message in messages:
        role = "assistant" if message["role"] == "bot" else message["role"]
        if role != "system":
            transformed_messages.append({"role": role, "message": message["content"]})
    return transformed_messages

def create_chat_prompt_template_instance(kernel: Kernel, messages: list = None):
    req_settings = sk_oai.AzureChatRequestSettings(max_tokens=2000, temperature=0, extension_data={"chat_system_prompt": system_message})
    req_settings.unpack_extension_data()
    config = PromptTemplateConfig(completion=req_settings)
    template = ChatPromptTemplate(prompt_template, kernel.prompt_template_engine, config)
    if messages:  # default of None avoids a mutable default argument
        for message in preprocess_messages(messages):
            template.add_message(message["role"], message["message"])
    function_config = SemanticFunctionConfig(config, template)
    return kernel.register_semantic_function("ChatBot", "rag_chat", function_config)

# simplified calling
messages = [{"content": "Q1", "role": "user"}, {"content": "A1", "role": "bot"}, {"content": "Q2", "role": "user"}]
query = messages[-1]["content"]
context["context"] = combined_documents
context["input"] = query

previous_messages = messages[:-1]
chat_function = create_chat_prompt_template_instance(kernel=kernel, messages=previous_messages)

response = await kernel.run_async(chat_function, input_context=context)
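One way to check whether the prior turns actually reach the API in a setup like this is to inspect the request the openai library sends. A minimal sketch (assumes openai-python v1.x, which honors the `OPENAI_LOG` environment variable; no request is made here):

```python
import logging
import os

# openai-python v1.x reads OPENAI_LOG at import time; "debug" logs each
# outgoing request body, so you can confirm the previous turns appear in
# the `messages` payload. Set it before importing/creating the client.
os.environ["OPENAI_LOG"] = "debug"

# Equivalent without the env var: raise the library's logger level directly.
logging.basicConfig()
logging.getLogger("openai").setLevel(logging.DEBUG)

# Any subsequent client.chat.completions.create(...) call will then log
# the full JSON it sends, including every message in the list.
```

If the logged payload shows only the latest user message, the history is being dropped on the Semantic Kernel side before the request is built.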

Code snippets

No response

OS

Ubuntu

Python version

Python 3.9

Library version

openai 1.0

@HuskyDanny HuskyDanny added the bug Something isn't working label Jan 23, 2024
@HuskyDanny (Author)

It seems like different wording brings GPT's memory back, which is confusing.

[screenshot]

@rattrayalex (Collaborator)

This does not look like a bug in the SDK (more like a prompt engineering question or a problem in the underlying model), so I'm going to go ahead and close this issue.
For help developing with the OpenAI API from fellow developers, I recommend the OpenAI Discord server!

@rattrayalex closed this as not planned (won't fix, can't repro, duplicate, stale) Jan 24, 2024