Issues: Lightning-AI/pytorch-lightning

Issues list

Confusing recommendation to use sync_dist=True even with TorchMetrics (labels: bug, help wanted, logging, ver: 2.2.x)
#20153 opened Aug 2, 2024 by srprca

Support restoring callbacks' status when predicting (labels: feature, help wanted)
#20137 opened Jul 29, 2024 by zihaozou

OptimizerLRScheduler typing does not fit examples (labels: bug, example, help wanted, ver: 2.2.x)
#20106 opened Jul 19, 2024 by MalteEbner

Training time increases epoch by epoch (labels: bug, help wanted, performance, repro needed, ver: 2.2.x)
#20076 opened Jul 12, 2024 by Eric-Lin-CVTE

Enable loading a universal checkpointing checkpoint in DeepSpeedStrategy (labels: feature, help wanted, strategy: deepspeed)
#20065 opened Jul 9, 2024 by zhoubay

trainer.test() with a given checkpoint logs the last epoch instead of the checkpoint epoch (labels: bug, help wanted, repro needed)
#20052 opened Jul 5, 2024 by markussteindl

[Fabric Lightning] Named barriers (labels: distributed, feature, help wanted)
#20027 opened Jun 28, 2024 by tesslerc

Add a truncated backpropagation through time (TBPTT) example (labels: docs, help wanted)
#19985 opened Jun 17, 2024 by svnv-svsv-jm

"Another profiling tool is already active" (labels: bug, help wanted, profiler, ver: 2.2.x)
#19983 opened Jun 17, 2024 by zhaohm14

Documentation: writing custom samplers compatible with multi-GPU training (labels: docs, help wanted)
#19964 opened Jun 10, 2024 by fteufel

Return num_replicas=world_size when using a distributed sampler in DDP (labels: distributed, duplicate, feature, help wanted, strategy: ddp)
#19961 opened Jun 9, 2024 by arjunagarwal899
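The sampler issue above concerns how DDP-style distributed samplers shard a dataset across replicas (and why code may need to know `num_replicas`). As a rough pure-Python illustration (the helper `shard_indices` is hypothetical, not Lightning's or PyTorch's code), such a sampler typically gives replica `rank` every `world_size`-th index, padding by wrap-around so every replica receives the same number of samples:

```python
def shard_indices(dataset_len, world_size, rank):
    """Return the indices assigned to one replica (rank) out of world_size."""
    # Pad so every replica gets the same number of samples, similar to what
    # torch.utils.data.DistributedSampler does by default (drop_last=False).
    per_replica = -(-dataset_len // world_size)  # ceil division
    total = per_replica * world_size
    indices = [i % dataset_len for i in range(total)]  # wrap-around padding
    return indices[rank:total:world_size]

# Each of 3 replicas gets 4 of the 10 samples (2 are wrap-around padding),
# so a length-aware consumer needs to know world_size / num_replicas.
shards = [shard_indices(10, world_size=3, rank=r) for r in range(3)]
```

This is why the per-replica dataloader length differs from the global dataset length, which is the information the feature request wants exposed.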
Apply the `ignore` argument of save_hyperparameters to positional args (labels: feature, good first issue, help wanted)
#19761 opened Apr 11, 2024 by doveppp
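The feature request above asks for `save_hyperparameters`'s `ignore` list to also cover positional args. As a hedged, pure-Python sketch of the requested filtering semantics (the helper `filter_hparams` is hypothetical and not Lightning code):

```python
def filter_hparams(hparams, ignore=()):
    """Drop any captured hyperparameter whose name is in the ignore list."""
    ignored = set(ignore)
    return {k: v for k, v in hparams.items() if k not in ignored}

# Names passed via ignore= would be excluded regardless of how the
# original __init__ arguments were supplied (positionally or by keyword).
hparams = {"lr": 0.1, "model_name": "resnet", "dataset": "cifar"}
kept = filter_hparams(hparams, ignore=["dataset"])
```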
How to use the BackboneFinetuning callback? (labels: callback: finetuning, docs, help wanted)
#19711 opened Mar 28, 2024 by Antoine101

Docs don't render LaTeX formulas (labels: docs, good first issue, help wanted)
#19633 opened Mar 15, 2024 by zichunxx

Does Trainer(devices=1) use all CPUs? (labels: good first issue, help wanted, question, ver: 2.2.x)
#19595 opened Mar 7, 2024 by MaximilienLC

Potential off-by-one error when resuming training from a mid-epoch checkpoint (labels: bug, help wanted, loops, ver: 2.1.x)
#19367 opened Jan 29, 2024 by ivnle
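The last issue above reports a possible off-by-one when resuming from a mid-epoch checkpoint. A minimal sketch of the general pitfall (hypothetical helper `resume_batches`, not Lightning's loop code): whether the checkpoint stores the index of the last *completed* batch or of the *next* batch to run determines whether resume skips or repeats a batch.

```python
def resume_batches(total_batches, saved_batch_idx, idx_is_next_batch):
    """Return the batch indices processed after resuming from a checkpoint."""
    # If the saved index already points at the next batch, start there;
    # if it is the last completed batch, start one past it.
    start = saved_batch_idx if idx_is_next_batch else saved_batch_idx + 1
    return list(range(start, total_batches))

# Checkpoint taken after batch 4 of 8 completed:
correct = resume_batches(8, 4, idx_is_next_batch=False)      # resumes at batch 5
# Misreading a "next batch" index of 5 as a completed index skips batch 5:
off_by_one = resume_batches(8, 5, idx_is_next_batch=False)
```

Mixing up the two conventions on either the save or the restore side produces exactly the one-batch drift the issue describes.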