
Allow choosing the QA pairs from the uploaded file itself #7430

Closed
4 of 5 tasks
401557122 opened this issue Aug 20, 2024 · 1 comment
Labels
🙋‍♂️ question This issue does not contain proper reproduce steps or it only has limited words without details.

Comments

@401557122

Self Checks

  • I have searched for existing issues, including closed ones.
  • I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • [FOR CHINESE USERS] Please be sure to submit the issue in English, otherwise it will be closed. Thank you! :)
  • Please do not modify this template :) and fill in all the required fields.

1. Is this request related to a challenge you're experiencing? Tell me about your story.

To avoid the slowness of having a large model generate the QA pairs, I modified the code below and standardized the file header so that the question and answer can be extracted from the file directly. With this change the QA preview no longer calls the LLM, but a large model is still used to generate the QA pairs during indexing, and the final result reflects that. Why is the LLM still being invoked?
@classmethod
def generate_qa_document(cls, tenant_id: str, query, document_language: str):
    prompt = GENERATOR_QA_PROMPT.format(language=document_language)
    # First try to extract an existing question/answer pair with a regular expression
    import re
    match = re.search(r'questions:\s*(.*?)\s*;\s*answers:\s*(.*)', query, re.DOTALL)
    if match:
        # The file header already contains the QA pair, so build the answer directly
        one_question = match.group(1).strip()
        one_answer = match.group(2).strip()
        answer = 'Q1:' + one_question + '\n' + 'A1:' + one_answer
    else:
        # Fall back to the default LLM to generate the QA pair
        model_manager = ModelManager()
        model_instance = model_manager.get_default_model_instance(
            tenant_id=tenant_id,
            model_type=ModelType.LLM,
        )

        prompt_messages = [
            SystemPromptMessage(content=prompt),
            UserPromptMessage(content=query)
        ]

        response = model_instance.invoke_llm(
            prompt_messages=prompt_messages,
            model_parameters={
                'temperature': 0.01,
                'max_tokens': 2000
            },
            stream=False
        )

        answer = response.message.content
    return answer.strip()
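
For reference, here is a minimal sketch of how the modified method behaves when the standardized header is present. The sample text, the tenant id, and the enclosing class name LLMGenerator are assumptions for illustration; only the "questions: ...; answers: ..." layout comes from the regex above.

    # Hypothetical input following the standardized header; the regex matches,
    # so the QA pair is taken from the file and no LLM call is made.
    sample = "questions: What does this file cover?; answers: It covers the upload workflow."
    result = LLMGenerator.generate_qa_document(      # LLMGenerator: assumed enclosing class
        tenant_id='example-tenant',                  # hypothetical tenant id
        query=sample,
        document_language='English'
    )
    print(result)
    # Q1:What does this file cover?
    # A1:It covers the upload workflow.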

2. Additional context or comments

No response

3. Can you help us with this feature?

  • I am interested in contributing to this feature.
@dosubot (bot) added the 🙋‍♂️ question label on Aug 20, 2024
@crazywoola
Member

Duplicate of #6904
