Software environment
- paddlepaddle: 3.0.0b0
- paddlepaddle-gpu:
- paddlenlp: 3.0.0b0.post0

Error description
Line 3 of the example program fails with: Exception has occurred: AttributeError: module 'mmap' has no attribute 'MAP_PRIVATE'

Steps to reproduce & code
```python
from paddlenlp.transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B", dtype="float16")
input_features = tokenizer("你好!请自我介绍一下。", return_tensors="pd")
outputs = model.generate(**input_features, max_length=128)
tokenizer.batch_decode(outputs[0])
```

Expected output:

```
['我是一个AI语言模型,我可以回答各种问题,包括但不限于:天气、新闻、历史、文化、科学、教育、娱乐等。请问您有什么需要了解的吗?']
```
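For context, the missing attribute is platform-specific: CPython defines mmap.MAP_PRIVATE only on Unix-like systems, so the constant simply does not exist on Windows. A quick check (not part of the original report) makes this visible:

```python
import mmap
import sys

# mmap.MAP_PRIVATE is a Unix-only constant; CPython does not define it on
# Windows, which is exactly the AttributeError reported above.
print(sys.platform)                  # e.g. 'win32' on Windows
print(hasattr(mmap, "MAP_PRIVATE"))  # False on Windows, True on Linux/macOS
```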
Sorry about this problem. The root cause has been identified, and the fix is in PR #8734 (not yet merged); until it lands, please apply the changes from that PR manually.

Root cause: Python's mmap module exposes two different constructor signatures on Windows and Unix, and the Windows API does not accept a flags argument:

Windows: mmap(fileno, length[, tagname[, access[, offset]]])
Unix: mmap(fileno, length[, flags[, prot[, access[, offset]]]])
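To illustrate the kind of change needed (a minimal sketch under those two signatures, not the actual diff in #8734), the mmap call can be guarded by platform, expressing read-only intent through the access parameter on Windows where flags and prot do not exist:

```python
import mmap
import sys

def map_file_readonly(f, length=0):
    """Map an already-open file read-only, handling both mmap signatures."""
    if sys.platform == "win32":
        # The Windows constructor has no flags/prot parameters; use the
        # access parameter to request a read-only mapping instead.
        return mmap.mmap(f.fileno(), length, access=mmap.ACCESS_READ)
    # On Unix, MAP_PRIVATE + PROT_READ gives a private read-only mapping.
    return mmap.mmap(f.fileno(), length,
                     flags=mmap.MAP_PRIVATE, prot=mmap.PROT_READ)
```

For read-only weight loading, ACCESS_READ matches MAP_PRIVATE + PROT_READ closely enough; if copy-on-write semantics were needed, mmap.ACCESS_COPY would be the Windows-compatible analogue of MAP_PRIVATE.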
I modified the code as in #8734, but the example still fails. The mmap error is gone; the new error is: Exception has occurred: PermissionError: [WinError 5] Access is denied, raised at: model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B", dtype="float16")
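One common way mmap raises this error on Windows (offered only as a debugging hypothesis, not a confirmed diagnosis of the failure above) is requesting a writable mapping over a file handle that was opened read-only:

```python
import mmap
import os
import tempfile

# Hypothetical reproduction: a writable mapping over a read-only handle.
path = os.path.join(tempfile.gettempdir(), "weights_demo.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 1024)

with open(path, "rb") as f:  # read-only handle
    try:
        mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_WRITE)
    except PermissionError as exc:
        # On Windows this surfaces as [WinError 5] Access is denied;
        # the Unix equivalent is [Errno 13] Permission denied.
        print(exc)
```

If the post-#8734 code requests ACCESS_WRITE (or ACCESS_DEFAULT) on a file opened with "rb", switching to mmap.ACCESS_READ would be the first thing to try.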
This issue is stale because it has been open for 60 days with no activity.
This issue was closed because it has been inactive for 14 days since being marked as stale.