Sample many files more efficiently #49

Open
hotohoto opened this issue Dec 13, 2022 · 1 comment

Comments

@hotohoto

sample.py's implementation depends on K.evaluation.compute_features(), which generates all n samples and returns them in memory at once. This is inefficient in terms of memory usage. Instead of generating and gathering every sample in memory, it should save each chunk of generated images to files and then free or reuse that memory.
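
For example, something along these lines. This is only a rough sketch, not the repo's actual API: `sample_fn` is a placeholder for the model's sampling call, and its outputs are assumed to be in [-1, 1].

```python
# Minimal sketch of chunked sampling: write each batch to disk instead of
# accumulating all n samples in memory. `sample_fn(k)` is a placeholder for a
# callable returning a [k, C, H, W] tensor on the GPU (assumed range [-1, 1]).
import torch
from torchvision.utils import save_image


@torch.no_grad()
def sample_in_chunks(sample_fn, n, batch_size, out_dir):
    index = 0
    while index < n:
        cur = min(batch_size, n - index)
        images = sample_fn(cur)                      # [cur, C, H, W] on the GPU
        images = images.clamp(-1, 1).add(1).div(2)   # map [-1, 1] -> [0, 1] for saving
        for image in images.cpu():
            save_image(image, f'{out_dir}/sample_{index:06}.png')
            index += 1
        del images                                   # drop the reference so the
        torch.cuda.empty_cache()                     # allocator can reclaim the VRAM
```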

@drscotthawley

Yea, I'm getting CUDA OOM errors after I sample my first batch, because the second batch doubles the VRAM usage, the third batch triples it, and so on. I wasn't expecting this behavior. For now, I guess I'll wrap sample.py in a loop in a shell script and only let n = batch_size when calling sample.
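
A rough sketch of that workaround as a driver loop, so each invocation only ever holds batch_size samples in VRAM. The flag names below (--n, --batch-size, --prefix) are placeholders, not sample.py's actual CLI.

```python
# Call sample.py once per batch; every run starts with a fresh CUDA context,
# so VRAM usage stays bounded by a single batch. Flag names are hypothetical.
import subprocess

batch_size = 16
total = 256

for start in range(0, total, batch_size):
    subprocess.run(
        ['python', 'sample.py',
         '--n', str(batch_size),
         '--batch-size', str(batch_size),
         '--prefix', f'out_{start:06}'],
        check=True,   # stop if any invocation fails
    )
```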
