Issues: Lightning-AI/pytorch-lightning

Loggers fails to create metrics.csv file when running on multiple TPU cores
Labels: bug (Something isn't working), help wanted (Open to be worked on), strategy: xla, ver: 2.2.x
#19035 opened Nov 20, 2023 by javiergaitan
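For context, the setup this report describes boils down to attaching a CSVLogger to a Trainer that runs across several TPU cores. A minimal sketch of that configuration follows; the model, data, save directory, and hyper-parameters are illustrative placeholders rather than details taken from the issue.

# Minimal sketch of the reported setup (model, data, and paths are placeholders):
# a CSVLogger attached to a Trainer running on multiple TPU cores via the XLA strategy.
import torch
from torch.utils.data import DataLoader, TensorDataset

import lightning.pytorch as pl
from lightning.pytorch.loggers import CSVLogger


class TinyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.cross_entropy(self.layer(x), y)
        self.log("train_loss", loss)  # the metric CSVLogger should write to metrics.csv
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


if __name__ == "__main__":
    data = DataLoader(
        TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,))), batch_size=8
    )
    trainer = pl.Trainer(
        accelerator="tpu",
        devices=8,                 # multiple TPU cores, the case the report is about
        logger=CSVLogger("logs"),  # expected to produce a metrics.csv under the save directory
        max_epochs=1,
    )
    trainer.fit(TinyModel(), data)
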
Calling trainer.fit twice with spawn strategies won't work as expected
Labels: bug (Something isn't working), priority: 1 (Medium priority task), strategy: ddp (DistributedDataParallel), strategy: xla, ver: 2.0.x
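The failure mode here is a call pattern rather than a particular model: one process creates a Trainer with a spawn-based strategy and calls fit more than once. A hedged sketch of that pattern is below, reusing TinyModel and the data loader from the previous sketch; the CPU/ddp_spawn combination is illustrative, and the report covers the XLA strategy as well.

# Sketch of the reported call pattern (reuses TinyModel and `data` from the sketch
# above; the CPU/ddp_spawn combination is illustrative, the XLA strategy spawns similarly).
model = TinyModel()
trainer = pl.Trainer(accelerator="cpu", devices=2, strategy="ddp_spawn", max_epochs=1)
trainer.fit(model, data)  # first call: worker processes are spawned, run, and torn down
trainer.fit(model, data)  # second call: per the report, this does not behave as expected
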
[TPU] Add Trainer support for PyTorch XLA FSDP
Labels: feature (Is an improvement or enhancement), fabric (lightning.fabric.Fabric), pl (Generic label for PyTorch Lightning package), has conflicts, strategy: fsdp (Fully Sharded Data Parallel), strategy: xla
TPU v3-8 deadlocks when using datasets larger than 2^15 on 8 devices
Labels: accelerator: tpu (Tensor Processing Unit), bug (Something isn't working), help wanted (Open to be worked on), strategy: xla, ver: 2.0.x
#18176 opened Jul 27, 2023 by SebastianLoef
Support AMP with TPUs
Labels: feature (Is an improvement or enhancement), fabric (lightning.fabric.Fabric), precision: amp (Automatic Mixed Precision), strategy: xla, trainer
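This feature request amounts to asking the precision plugin system to accept automatic mixed precision together with the TPU/XLA accelerator. The snippet below sketches the kind of configuration the request would enable, using Fabric; it is not claimed to work on current releases, and the device count is a placeholder.

# What the feature request asks to support (not expected to run on current releases):
# AMP-style mixed precision combined with the TPU accelerator.
from lightning.fabric import Fabric

fabric = Fabric(accelerator="tpu", devices=8, precision="16-mixed")  # device count is a placeholder
fabric.launch()  # would start one process per TPU core if the combination were accepted
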
Integrating pytorch XLA when using multiple GPUs
Labels: accelerator: cuda (Compute Unified Device Architecture GPU), feature (Is an improvement or enhancement), help wanted (Open to be worked on), strategy: xla