MPS and XPU support #1075
Conversation
I don't know how, or whether, these lines need to change:

line 67: `self.flow.decoder.estimator_engine = trt.Runtime(trt.Logger(trt.Logger.INFO)).deserialize_cuda_engine(f.read())`

line 103: `self.flow.decoder.estimator_engine = trt.Runtime(trt.Logger(trt.Logger.INFO)).deserialize_cuda_engine(f.read())`

This line certainly needs to be changed, but I don't know to what:

lines 67/336: `self.llm_context = torch.cuda.stream(torch.cuda.Stream(self.device)) if torch.cuda.is_available() else nullcontext()`
In `cosyvoice/cli/model.py`, the two `mps` occurrences are missing their quotes.
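For illustration, the difference is a bare identifier versus a string literal: `torch.device(mps)` tries to look up a Python name `mps`, while `torch.device("mps")` passes the device string. A torch-free sketch of the same failure mode, using a hypothetical `device_arg_ok` validator (not part of CosyVoice):

```python
# An unquoted `mps` is a name lookup; only the quoted string is a device spec.
def device_arg_ok(arg):
    # Hypothetical validator, for illustration only.
    return isinstance(arg, str) and arg.split(":")[0] in {"cpu", "cuda", "mps", "xpu"}

assert device_arg_ok("mps")      # quoted string: accepted
try:
    device_arg_ok(mps)           # bare name: NameError before the call even runs
except NameError:
    print("unquoted mps is a NameError")
```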
Thank you! Now type checked and linted. Needs some testing. I haven't installed this myself, because I don't know how to fix those two lines. If you'd like to test it with me, I can help further.
Thank you very much. It runs correctly on an M4 Pro; I hope this keeps being maintained.
After testing on XPU and CUDA, I hope it can be merged, since that would mean less hassle for every team... Thank you for CosyVoice2. I have updated support for all cards and corrected the seeding routines. Please test before merging.
I have an A770 and I'd like to use your commits to enable XPU, but what packages do I need to install to work with your code? Just the torch XPU nightly? Is there a requirements.txt, or an install guide?
Great! You should make sure you have this, and then simply run:

inside your environment. That's it! If you have trouble, open an issue here and I'll do my best to patch it for you. Please do not open an issue for this in the FunAudioLLM repo; it's not their problem, and they have yet to accept my help. Unfortunately there are no formal instructions, because the original CUDA installation has none either; it is safe to presume the code authors consider it out of scope. There isn't a requirements.txt because setting up requirements for pip and GPUs currently does not work very well. I'll work on including a
I don't see an issues tab on your repo. I just finished the install; trying to run
Sorry about that; expect progress on this to continue here: https://github.com/exdysa/CosyVoice/issues. Try https://github.com/eighteen-k-gold-malow/CosyVoice-XPU/commits/main/ for the time being. I will look at working with them on this.
RE: #1011
It is a few simple lines that allow MPS and XPU to pass through the gatekeeping, enabling GPUs other than NVIDIA's to run.
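The pass-through can be pictured as a priority fallback over whichever backends report themselves available. A minimal sketch of the idea, with the availability checks abstracted into booleans so it runs anywhere (in the real code they come from `torch.cuda.is_available()`, `torch.backends.mps.is_available()`, and `torch.xpu.is_available()`; the PR edits existing checks rather than adding a helper like this):

```python
def select_device(cuda=False, mps=False, xpu=False):
    """Pick the first available accelerator, else CPU (sketch only)."""
    if cuda:
        return "cuda"
    if mps:
        return "mps"
    if xpu:
        return "xpu"
    return "cpu"

print(select_device(mps=True))  # on an Apple Silicon machine: mps
```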
Caches, device setting, and GPU `assert` statements have been changed, though it is possible I overlooked something.

Note: I am unsure whether the following lines in `cosyvoice/cli/model.py` need to be changed. This one, because I am not familiar with TensorRT:

`self.flow.decoder.estimator_engine = trt.Runtime(trt.Logger(trt.Logger.INFO)).deserialize_cuda_engine(f.read())`

And this one (lines 67/336) will assign `nullcontext()`, but I know of no alternative:

`self.llm_context = torch.cuda.stream(torch.cuda.Stream(self.device)) if torch.cuda.is_available() else nullcontext()`
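One way to keep that line device-agnostic is exactly the shape it already has: use the CUDA stream when CUDA is present and fall back to a no-op context everywhere else. A sketch, with the torch module injected as a parameter so the snippet runs without a GPU (whether MPS or XPU expose a usable stream API here is an open question, hence the conservative fallback):

```python
from contextlib import nullcontext

def make_llm_context(torch_mod=None):
    """Return a CUDA stream context when CUDA is available, else a no-op.
    `torch_mod` is injected so this sketch is testable without torch."""
    if torch_mod is not None and torch_mod.cuda.is_available():
        return torch_mod.cuda.stream(torch_mod.cuda.Stream())
    return nullcontext()

with make_llm_context():
    pass  # no CUDA here: behaves as a plain no-op context
```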