1. Is this request related to a challenge you're experiencing? Tell me about your story.
To avoid using a large model to generate QA pairs (which is too slow), I modified the code below and standardized the file header. The QA preview did not call the LLM, yet the LLM was still used to generate QA pairs during encoding and in the final result. Why?
```python
def generate_qa_document(cls, tenant_id: str, query, document_language: str):
    prompt = GENERATOR_QA_PROMPT.format(language=document_language)
    # First, use a regular expression to extract the original question and answer
    import re
    match = re.search(r'questions:\s*(.*?)\s*;\s*answers:\s*(.*)', query, re.DOTALL)
    if match:
        one_question = match.group(1).strip()
        one_answer = match.group(2).strip()
        answer = 'Q1:' + one_question + '\n' + 'A1:' + one_answer
    else:
        model_manager = ModelManager()
        model_instance = model_manager.get_default_model_instance(
            tenant_id=tenant_id,
            model_type=ModelType.LLM,
        )
```
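As a sanity check, the extraction regex can be exercised on a hypothetical query string — the exact upstream `questions: …; answers: …` format and the sample text are assumptions, not taken from the actual pipeline:

```python
import re

# Hypothetical input in the "questions: ...; answers: ..." shape the
# regex is meant to match (the real upstream format is an assumption).
sample = "questions: What is Dify?; answers: An LLM app platform."

match = re.search(r'questions:\s*(.*?)\s*;\s*answers:\s*(.*)', sample, re.DOTALL)
if match:
    # Build the same "Q1:...\nA1:..." string the modified code produces
    qa = 'Q1:' + match.group(1).strip() + '\n' + 'A1:' + match.group(2).strip()
```

If the regex matches, the LLM branch should be skipped entirely, so any remaining LLM calls must come from a different code path (e.g. the indexing step, not this function).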
2. Additional context or comments
No response
3. Can you help us with this feature?