Releases: sophgo/tpu-mlir
Technical Preview
This beta version of TPU-MLIR is for testing purposes only—do not use it in production.
Notable changes:
- Lots of bug fixes and performance improvements.
- TPU-MLIR supports importing PyTorch models directly (no need to convert to ONNX first); see the illustrative sketch after this list.
- Unified pre-processing for bm168x and cv18xx chips.
- Support for the bm1684 chip is underway.
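The direct PyTorch path typically starts from a traced TorchScript module rather than an ONNX export. Below is a minimal, hedged sketch using only standard torch/torchvision APIs; resnet18 is just a stand-in model, and the exact TPU-MLIR conversion command (typically model_transform.py) and its flags vary by release:

```python
import torch
import torchvision

# Build a stand-in model and trace it to TorchScript; tracing records the
# graph that a PyTorch front end can consume directly, with no ONNX export.
model = torchvision.models.resnet18().eval()
example_input = torch.randn(1, 3, 224, 224)
traced = torch.jit.trace(model, example_input)

# Save the traced module; this .pt file can then be handed to the TPU-MLIR
# conversion tooling in place of an .onnx file.
torch.jit.save(traced, "resnet18_traced.pt")
```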
Technical Preview
This beta version of TPU-MLIR is for testing purposes only—do not use it in production.
Notable changes:
- Resolved pre-processing performance issues.
- Added shape inference for dynamic input shapes.
- Implemented constant folding to simplify the graph (a generic illustration follows this list).
- Improved overall performance; further optimizations are in progress.
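Constant folding here is the standard graph simplification: any operation whose operands are all compile-time constants is evaluated ahead of time and replaced by its result. The sketch below is a generic Python illustration of the idea, not TPU-MLIR's actual pass:

```python
import operator

# A toy graph: each node is (name, op, inputs); known constants live in a dict.
# Nodes whose inputs are all constants are evaluated up front and replaced by
# a constant, shrinking the graph before lowering.
OPS = {"add": operator.add, "mul": operator.mul}

def fold_constants(nodes, constants):
    remaining = []
    for name, op, inputs in nodes:
        if all(i in constants for i in inputs):
            constants[name] = OPS[op](*(constants[i] for i in inputs))
        else:
            remaining.append((name, op, inputs))
    return remaining, constants

nodes = [
    ("scale", "mul", ("two", "three")),  # 2 * 3 -> folded to 6
    ("out",   "add", ("x", "scale")),    # depends on runtime input x, kept
]
nodes, consts = fold_constants(nodes, {"two": 2, "three": 3})
print(nodes)   # [('out', 'add', ('x', 'scale'))]
print(consts)  # {'two': 2, 'three': 3, 'scale': 6}
```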
Technical Preview
This beta version of TPU-MLIR is for testing purposes only—do not use it in production.
Notable changes:
- Image pre-processing can now be offloaded to the TPU, improving performance (see the generic sketch after this list).
- Many bug fixes allow TPU-MLIR to support more neural networks.
- Fixed the pooling sign error present in v0.8-beta.3.
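Offloading pre-processing means the normalization normally done on the host becomes part of the compiled model, so the runtime can feed raw images straight to the TPU. The snippet below only illustrates the kind of arithmetic that gets moved on-chip (cast, mean subtraction, scaling, layout change); it is a generic sketch, not TPU-MLIR's implementation, and the mean/scale values are placeholders:

```python
import numpy as np

def preprocess(image_u8, mean, scale):
    """Typical image pre-processing that can be fused into the model:
    cast to float, subtract per-channel mean, multiply by per-channel scale,
    and reorder HWC -> NCHW. When offloaded, the compiled model accepts the
    raw uint8 image and performs these steps on the TPU."""
    x = image_u8.astype(np.float32)
    x = (x - np.array(mean, np.float32)) * np.array(scale, np.float32)
    return np.transpose(x, (2, 0, 1))[np.newaxis, ...]

# Example: ImageNet-style normalization on a dummy 224x224 RGB image.
img = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
blob = preprocess(img, mean=[123.675, 116.28, 103.53],
                  scale=[0.0171, 0.0175, 0.0174])
print(blob.shape)  # (1, 3, 224, 224)
```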
Technical Preview
This beta version of TPU-MLIR is for testing purposes only—do not use it in production.
Notable changes:
- Image pre-processing can now be offloaded to the TPU, improving performance.
- Many bug fixes allow TPU-MLIR to support more neural networks.
- Fixed a pre-processing conversion bug present in v0.8-beta.2.
Technical Preview
This beta version of TPU-MLIR is for testing purposes only—do not use it in production.
Notable changes:
- Image pre-processing can now be offloaded to the TPU, improving performance.
- Many bug fixes allow TPU-MLIR to support more neural networks.
- Fixed a bug in reading the pre-processing configuration, present in v0.8-beta.1.
Technical Preview
This beta version of TPU-MLIR is for testing purposes only—do not use it in production.
Notable changes:
- Image pre-processing can now be offloaded to the TPU, improving performance.
- Many bug fixes allow TPU-MLIR to support more neural networks.
Technical Preview
This is a beta version of TPU-MLIR. Please don't use it in a production environment.
With this release, the following changes are worth highlighting:
- Optimized the layer grouping process, yielding a significant performance improvement (a simplified sketch of the idea follows this list).
- Many bug fixes allow TPU-MLIR to support more neural networks.
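Layer grouping keeps the intermediate tensors of several consecutive layers in local (on-chip) memory so they never round-trip through DDR between layers. The sketch below is a deliberately simplified, greedy illustration of that idea under an assumed memory budget; the real pass in TPU-MLIR is considerably more involved:

```python
# A simplified sketch of layer grouping: fuse consecutive layers into one
# group as long as their intermediate buffers fit in local (on-chip) memory,
# so results only spill to DDR at group boundaries.
LOCAL_MEM_BYTES = 16 * 1024 * 1024  # assumed on-chip budget, illustrative only

def group_layers(layer_buffer_bytes):
    groups, current, used = [], [], 0
    for name, nbytes in layer_buffer_bytes:
        if current and used + nbytes > LOCAL_MEM_BYTES:
            groups.append(current)  # close the group; spill to DDR once
            current, used = [], 0
        current.append(name)
        used += nbytes
    if current:
        groups.append(current)
    return groups

layers = [("conv1", 6 << 20), ("relu1", 6 << 20), ("conv2", 9 << 20), ("pool", 2 << 20)]
print(group_layers(layers))  # [['conv1', 'relu1'], ['conv2', 'pool']]
```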
Welcome to TPU-MLIR. To get started, you can:
- Follow the Readme to understand how to use TPU-MLIR: https://github.com/sophgo/tpu-mlir
- Read the design of TPU-MLIR: https://arxiv.org/abs/2210.15016
- Understand the development plan: https://github.com/sophgo/tpu-mlir/wiki/Roadmap%5BCN%5D
- Understand the project structure: https://github.com/sophgo/tpu-mlir/wiki/Tutorial%5BCN%5D
- Try to solve the "good first issue" issues from https://github.com/sophgo/tpu-mlir/issues; they are relatively small, and more will be added over time.
- https://github.com/PaddlePaddle/FastDeploy has many PaddlePaddle models; you can get familiar with TPU-MLIR by adapting one of them (be sure to convert the PaddlePaddle model to ONNX format first; see the sketch after this list).
- For technical details, please refer to: https://tpumlir.org/docs/developer_manual/index.html.
- Questions and suggestions are welcome; everyone is welcome to exchange ideas and learn together.
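For the PaddlePaddle route, the model has to be exported to ONNX before TPU-MLIR can consume it. A minimal sketch, assuming PaddlePaddle's built-in exporter (paddle.onnx.export) is available in your version; the paddle2onnx tool is an alternative, and loading one of FastDeploy's pre-trained models will look different from the stand-in network used here:

```python
import paddle

# A stand-in network; for a FastDeploy model you would load the released
# pre-trained inference model instead of defining layers by hand.
net = paddle.nn.Sequential(
    paddle.nn.Conv2D(3, 8, kernel_size=3, padding=1),
    paddle.nn.ReLU(),
)
net.eval()

# Export to ONNX; the resulting .onnx file is what TPU-MLIR consumes.
input_spec = [paddle.static.InputSpec([1, 3, 224, 224], "float32", name="image")]
paddle.onnx.export(net, "demo_model", input_spec=input_spec, opset_version=13)
# Writes demo_model.onnx next to the script.
```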
Release candidate
Welcome to TPU-MLIR. To get started, you can:
- Follow the Readme to understand how to use TPU-MLIR: https://github.com/sophgo/tpu-mlir
- Read the design of TPU-MLIR: https://arxiv.org/abs/2210.15016
- Understand the development plan: https://github.com/sophgo/tpu-mlir/wiki/Roadmap%5BCN%5D
- Understand the project structure: https://github.com/sophgo/tpu-mlir/wiki/Tutorial%5BCN%5D
- Try to solve the "good first issue" issues from https://github.com/sophgo/tpu-mlir/issues; they are relatively small, and more will be added over time.
- https://github.com/PaddlePaddle/FastDeploy has many PaddlePaddle models; you can get familiar with TPU-MLIR by adapting one of them (note: convert the PaddlePaddle model to ONNX format first).
- For technical details, please refer to: https://tpumlir.org/docs/developer_manual/index.html
- Questions and suggestions are welcome; everyone is welcome to exchange ideas and learn together.