Description
🐛 Describe the bug
When running the `torchchat.py` CLI, even with `--help`, startup can take several seconds. This is caused by eager imports of expensive packages (particularly `torch`) that are unnecessary for some of the functionality but are imported regardless, even before the `argparse` parser is constructed.

The proposed fix is to defer all of the expensive imports so that they happen just-in-time. This would enable `--help` (and `list`/`where`) to run without needing to process the expensive imports first.
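A minimal sketch of the just-in-time import pattern being proposed (the command names mirror the CLI, but the function structure is illustrative, not torchchat's actual code):

```python
import argparse
import sys

def run_generate():
    # The heavy dependency is imported only when this command runs,
    # so `--help`, `list`, and `where` never pay the import cost.
    import torch  # just-in-time import
    return torch.__name__

def main(argv):
    parser = argparse.ArgumentParser(prog="torchchat.py")
    sub = parser.add_subparsers(dest="command")
    sub.add_parser("list")       # cheap command: no heavy imports
    sub.add_parser("generate")   # heavy command: needs torch
    args = parser.parse_args(argv)
    if args.command == "generate":
        return run_generate()
    return args.command
```

With this structure, `main(["list"])` returns without `torch` ever appearing in `sys.modules`; only the commands that genuinely need it trigger the import.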
Versions
Collecting environment information...
PyTorch version: 2.6.0.dev20241002
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.5 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: version 3.30.4
Libc version: N/A
Python version: 3.10.15 | packaged by conda-forge | (main, Sep 30 2024, 17:48:38) [Clang 17.0.6 ] (64-bit runtime)
Python platform: macOS-14.5-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3 Max
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.6.0.dev20241002
[pip3] torchao==0.5.0
[pip3] torchtune==0.4.0.dev20241010+cpu
[pip3] torchvision==0.20.0.dev20241002
[conda] numpy 1.26.4 pypi_0 pypi
[conda] torch 2.6.0.dev20241002 pypi_0 pypi
[conda] torchao 0.5.0 pypi_0 pypi
[conda] torchtune 0.4.0.dev20241010+cpu pypi_0 pypi
[conda] torchvision 0.20.0.dev20241002 pypi_0 pypi