Issues: Lightning-AI/pytorch-lightning

barebones mode should be more forceful
Labels: feature
#18355 opened Aug 21, 2023 by davidgilbertson

Support pl.LightningModule channels_last with training
Labels: feature (Is an improvement or enhancement), lightningmodule, trainer: argument
#15175 opened Oct 18, 2022 by Queuecumber

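For context on what this request is about: channels_last is PyTorch's NHWC memory format for 4-D tensors, which can speed up convolutions on modern GPUs; the issue asks Lightning to apply it during training. A minimal plain-PyTorch sketch (not Lightning API) of converting a model and its inputs:

```python
import torch

# Convert a conv model and its input batch to the channels_last
# memory format; shapes stay NCHW, only the strides change.
model = torch.nn.Conv2d(3, 8, kernel_size=3)
model = model.to(memory_format=torch.channels_last)

x = torch.randn(2, 3, 32, 32).to(memory_format=torch.channels_last)
out = model(x)
print(out.shape)  # torch.Size([2, 8, 30, 30])
```

The issue's point is that users currently have to do this conversion themselves rather than via a Trainer argument.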
Keep User-Defined Order of Callbacks
Labels: callback, discussion (In a discussion stage), trainer: argument
#15026 opened Oct 7, 2022 by wistuba

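The behavior under discussion: Lightning moves checkpoint callbacks to the end of the callback list, so the order users pass to `Trainer(callbacks=[...])` is not always the execution order. A plain-Python sketch of that kind of reordering (names illustrative, not Lightning's internals):

```python
def reorder_callbacks(callbacks):
    """Move 'checkpoint' entries to the end, keeping relative order."""
    regular = [c for c in callbacks if "checkpoint" not in c]
    checkpoints = [c for c in callbacks if "checkpoint" in c]
    return regular + checkpoints

user_order = ["checkpoint", "early_stopping", "lr_monitor"]
print(reorder_callbacks(user_order))
# ['early_stopping', 'lr_monitor', 'checkpoint']
```

The issue asks that the user-supplied order be preserved instead.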
RFC: Remove num_nodes Trainer argument and infer world size from cluster environment directly
Labels: deprecation (Includes a deprecation), environment, strategy: ddp (DistributedDataParallel), trainer: argument

Add enable_device_summary flag to disable device printout
Labels: callback, feature, good first issue (Good for newcomers), trainer: argument

Validation takes place every N time
Labels: feature, trainer: argument, trainer: fit

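This asks for validation scheduled by wall-clock time rather than by steps or epochs. A plain-Python sketch of the requested behavior, with an injectable clock so it can be demonstrated deterministically (all names hypothetical):

```python
import itertools
import time

def training_steps(num_steps, val_every_s, clock=time.monotonic):
    """Run num_steps steps; record the steps at which validation
    would run, i.e. whenever val_every_s seconds have elapsed."""
    last_val = clock()
    ran_validation = []
    for step in range(num_steps):
        # ... one optimization step would happen here ...
        if clock() - last_val >= val_every_s:
            ran_validation.append(step)
            last_val = clock()
    return ran_validation

# Deterministic demo with a fake one-second-per-call clock:
ticks = itertools.count()
print(training_steps(6, 3, clock=lambda: next(ticks)))  # [2, 5]
```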
Allow extra_epochs flag in Trainer.fit to control finetuning time
Labels: question (Further information is requested), trainer: argument
#13273 opened Jun 12, 2022 by franchesoni

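The request is for a relative epoch budget: instead of recomputing an absolute max_epochs when resuming, fit(extra_epochs=N) would train for N more epochs from wherever the checkpoint left off. A hypothetical sketch of the translation involved (names illustrative):

```python
def resolve_max_epochs(current_epoch, extra_epochs):
    """Translate a relative epoch budget into an absolute max_epochs."""
    return current_epoch + extra_epochs

# Resuming from epoch 17 and finetuning for 5 more epochs:
print(resolve_max_epochs(17, 5))  # 22
```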
[RFC] "auto" precision support
Labels: design (Includes a design discussion), feature, plugin, precision: amp (Automatic Mixed Precision), precision: apex (removed) (NVIDIA/apex precision), trainer: argument

[Trainer] dictionary access to multiple named loggers
Labels: feature, logger (Related to the Loggers), trainer: argument

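When several loggers are configured, they are exposed as a list, so retrieving a specific one means indexing by position. The issue asks for name-based lookup. A plain-Python sketch of a name-to-logger mapping (the `.name` attribute mirrors the Logger interface; everything else here is illustrative):

```python
class NamedLoggers(dict):
    """Dict-style access to loggers keyed by their name."""

    @classmethod
    def from_list(cls, loggers):
        return cls((lg.name, lg) for lg in loggers)

class FakeLogger:
    def __init__(self, name):
        self.name = name

loggers = NamedLoggers.from_list([FakeLogger("tensorboard"), FakeLogger("csv")])
print(loggers["csv"].name)  # csv
```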
A100 GPU MIG feature support for trainer
Labels: feature, trainer: argument
#10529 opened Nov 14, 2021 by minwang-ai

[RFC] Default to infinite epochs, not 1000
Labels: help wanted (Open to be worked on), let's do it! (approved to implement), trainer: argument

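The semantics under discussion: training stops at the default max_epochs of 1000 unless the user asks for an unbounded run, for which Trainer accepts max_epochs=-1. A minimal plain-Python sketch of that stopping rule:

```python
def should_stop(epoch, max_epochs):
    """-1 means 'no epoch limit', matching Trainer(max_epochs=-1)."""
    return max_epochs != -1 and epoch >= max_epochs

print(should_stop(1000, 1000))  # True
print(should_stop(10_000, -1))  # False
```

The RFC proposes making the unbounded behavior the default rather than an opt-in.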
Add trainer flag max_time_per_run
Labels: design, feature, trainer: argument

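Trainer already accepts max_time to cap the whole training job; the proposed max_time_per_run would instead cap each individual fit() call, e.g. to fit inside one cluster job's wall-time slice. A hypothetical time-budget check in plain Python, with an injectable clock for a deterministic demo:

```python
import itertools
import time

def run_with_budget(steps, budget_s, clock=time.monotonic):
    """Run up to `steps` steps, stopping once budget_s seconds elapse.
    A later run could resume from a checkpoint."""
    start = clock()
    completed = 0
    for _ in range(steps):
        if clock() - start >= budget_s:
            break
        completed += 1
    return completed

# Deterministic demo with a fake one-second-per-call clock:
ticks = itertools.count()
print(run_with_budget(10, 3, clock=lambda: next(ticks)))  # 2
```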
[RFC] Deprecate log_every_n_steps from Trainer
Labels: design (Includes a design discussion), logging (Related to the `LoggerConnector` and `log()`), trainer: argument
#9726 opened Sep 27, 2021 by daniellepintz

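For context, the knob this RFC proposes deprecating throttles how often metrics recorded via self.log() are flushed to the loggers. A simplified plain-Python sketch of the gating (Lightning's exact step convention may differ):

```python
def should_log(global_step, log_every_n_steps=50):
    """Log only every N global steps (simplified)."""
    return global_step % log_every_n_steps == 0

print([s for s in range(200) if should_log(s)])  # [0, 50, 100, 150]
```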