System Info
<frozen runpy>:128: RuntimeWarning: 'torch.utils.collect_env' found in sys.modules after import of package 'torch.utils', but prior to execution of 'torch.utils.collect_env'; this may result in unpredictable behaviour
Collecting environment information...
PyTorch version: 2.5.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Home
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.30.2
Libc version: N/A
Python version: 3.12.8 (tags/v3.12.8:2dc476b, Dec 3 2024, 19:30:04) [MSC v.1942 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-11-10.0.22631-SP0
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3050 6GB Laptop GPU
Nvidia driver version: 561.00
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=2300
DeviceID=CPU0
Family=1
L2CacheSize=18432
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=2500
Name=Intel(R) Core(TM) Ultra 9 185H
ProcessorType=3
Revision=
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] torch==2.5.1+cu118
[pip3] torchaudio==2.5.1+cu118
[pip3] torchvision==0.20.1+cu118
[conda] Could not collect
Information
The official example scripts
My own modified scripts
🐛 Describe the bug
When executing the llama command, a ModuleNotFoundError is thrown for termios.
termios is only available on Unix-like systems such as Linux; msvcrt should be used on Windows.
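The failure comes from llama_stack/distribution/utils/exec.py, which imports pty at module level. Below is a minimal sketch of the kind of platform guard this implies; run_command is a hypothetical stand-in, not the actual run_with_pty API, and it assumes a plain subprocess fallback is acceptable on Windows:

import os
import subprocess
import sys

if sys.platform != "win32":
    import pty  # Unix-only: importing pty pulls in tty, which imports termios


def run_command(args: list[str]) -> int:
    """Run a command under a pty on Unix, or as a plain subprocess on Windows."""
    if sys.platform == "win32":
        # Windows has no pty; msvcrt only covers raw console I/O, so the
        # simplest portable behaviour is an ordinary subprocess.
        return subprocess.run(args).returncode
    pid, fd = pty.fork()
    if pid == 0:
        # Child: exec the command attached to the new pty.
        os.execvp(args[0], args)
    # Parent: a real implementation would also stream output from fd here.
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)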
Error logs
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Users\brian\python\Scripts\llama.exe\__main__.py", line 4, in <module>
  File "C:\Users\brian\dev_space\Agentic Capital\llama-stack-orig\llama_stack\__init__.py", line 7, in <module>
    from llama_stack.distribution.library_client import (  # noqa: F401
  File "C:\Users\brian\dev_space\Agentic Capital\llama-stack-orig\llama_stack\distribution\library_client.py", line 34, in <module>
    from llama_stack.distribution.build import print_pip_install_help
  File "C:\Users\brian\dev_space\Agentic Capital\llama-stack-orig\llama_stack\distribution\build.py", line 23, in <module>
    from llama_stack.distribution.utils.exec import run_with_pty
  File "C:\Users\brian\dev_space\Agentic Capital\llama-stack-orig\llama_stack\distribution\utils\exec.py", line 10, in <module>
    import pty
  File "C:\Users\brian\python\Lib\pty.py", line 12, in <module>
    import tty
  File "C:\Users\brian\python\Lib\tty.py", line 5, in <module>
    from termios import *
ModuleNotFoundError: No module named 'termios'
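For what it's worth, the failing import chain (exec.py -> pty -> tty -> termios) reproduces on any Windows CPython without llama_stack installed, since the standard library documents pty and termios as Unix-only:

import importlib.util
import sys

print(sys.platform)                         # 'win32'
print(importlib.util.find_spec("termios"))  # None: termios does not exist on Windows
import pty  # raises ModuleNotFoundError: No module named 'termios'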
Expected behavior
The llama stack command executes normally, with the basic help text displayed:
PS C:\Users\brian\dev_space\Agentic Capital\llama-stack> llama stack
usage: llama [-h] {model,stack,download,verify-download} ...

Welcome to the Llama CLI

options:
  -h, --help            show this help message and exit

subcommands:
  {model,stack,download,verify-download}

PS C:\Users\brian\dev_space\Agentic Capital\llama-stack>