BUILD: No PyTorch dep #1854
Conversation
Force-pushed from be042d0 to 3d7affa.
We should also try a "Oneshot nightly build".
Hey folks, we really need to think this through a bit more. The current torch_mlir Python module is very fundamentally tied to PyTorch. I think if we want to do this properly, we should basically just expose the vanilla torch_mlir Python module, which is basically just a wrapper around our MLIR infra. This would mostly be cmake work, such as bypassing things like

torch-mlir/python/CMakeLists.txt, line 48 (at cc819e7):
declare_mlir_python_sources(TorchMLIRPythonSources.TopLevel

We might want to go even further -- having two Python modules: one which is just the vanilla MLIR infra, and then a wrapper around it which provides the integration into PyTorch.

For context, the torch_mlir Python module is currently the MLIR wrapper one (things like torch_mlir.ir.Module, torch_mlir.passmanager, etc.), and then we decorate some stuff inside it (all of which depends on PyTorch). We can either use cmake to eliminate all the additional stuff we put inside it, or pivot to having two separate Python modules. I think using cmake here is probably the right choice. The flag would be called TORCH_MLIR_ENABLE_ONLY_MLIR_PYTHON_BINDINGS, which implies TORCH_MLIR_ENABLE_JIT_IR_IMPORTER=NO and TORCH_MLIR_ENABLE_LTC=NO and some other stuff.
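For illustration, a minimal CMake sketch, assuming only the flag names quoted above, of how the "only MLIR bindings" option could force off the PyTorch-dependent options and bypass the PyTorch-flavored Python sources. The wiring itself is hypothetical, not the actual torch-mlir build:

    # Proposed flag from the discussion above; OFF preserves today's behavior.
    option(TORCH_MLIR_ENABLE_ONLY_MLIR_PYTHON_BINDINGS
           "Build only the vanilla MLIR Python bindings, without PyTorch integration" OFF)

    if(TORCH_MLIR_ENABLE_ONLY_MLIR_PYTHON_BINDINGS)
      # Force off everything that pulls in a PyTorch dependency.
      set(TORCH_MLIR_ENABLE_JIT_IR_IMPORTER OFF CACHE BOOL "" FORCE)
      set(TORCH_MLIR_ENABLE_LTC OFF CACHE BOOL "" FORCE)
    endif()

    # In python/CMakeLists.txt, the PyTorch-flavored top-level sources would then
    # be registered only when the full integration is enabled:
    if(NOT TORCH_MLIR_ENABLE_ONLY_MLIR_PYTHON_BINDINGS)
      # ... existing declare_mlir_python_sources(TorchMLIRPythonSources.TopLevel ...) call ...
    endif()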
Force-pushed from e2ab670 to 473c33e.
I like having two Python modules / whl packages. This way we can actually pass auditwheel and publish to PyPI.
Force-pushed from 38db17f to 2a74d29.
I'm not 100% sure what it means to "pass auditwheel", but after building the wheel, running

    (venv) $ auditwheel repair --plat manylinux_2_35_x86_64 torch_mlir-0.0.1-cp310-cp310-linux_x86_64.whl

gives:

    INFO:auditwheel.main_repair:Repairing torch_mlir-0.0.1-cp310-cp310-linux_x86_64.whl
    INFO:auditwheel.wheeltools:Previous filename tags: linux_x86_64
    INFO:auditwheel.wheeltools:New filename tags: manylinux_2_35_x86_64
    INFO:auditwheel.wheeltools:Previous WHEEL info tags: cp310-cp310-linux_x86_64
    INFO:auditwheel.wheeltools:New WHEEL info tags: cp310-cp310-manylinux_2_35_x86_64
    INFO:auditwheel.main_repair:
    Fixed-up wheel written to /home/mlevental/dev_projects/torch-mlir/wheelhouse/wheelhouse/torch_mlir-0.0.1-cp310-cp310-manylinux_2_35_x86_64.whl
Force-pushed from 4cc280a to 40135f1.
I believe this looks reasonable (but I've been wrong before 😅). Adjusting the actual build scripts should be a separate PR as they are ...dense...
Force-pushed from 40135f1 to 8c77bc6.
Do we want to frontload splitting the modules? Or do that later? This seems reasonable to me if it works. @powderluv?
I believe this meets all of our current needs completely, so I'm in support of deferring the split until there's a particular need.
Agreed. This is good to land and iterate.
Please change the commit message to remove WIP.
Also FYI for @ashay since he is cleaning up the whl and PyTorch requirements, etc.
Note the absence of torch==2.0.0.dev20230205.