I observe that the validation phase is much slower than the training phase on large validation sets and multi-GPU machines #13142
Comments
👋 Hello @ASharpSword, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution. If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it. If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Requirements

Python>=3.8.0 with all requirements.txt installed, including PyTorch>=1.8. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

Introducing YOLOv8 🚀

We're excited to announce the launch of our latest state-of-the-art (SOTA) object detection model for 2023 - YOLOv8 🚀! Designed to be fast, accurate, and easy to use, YOLOv8 is an ideal choice for a wide range of object detection, image segmentation and image classification tasks. With YOLOv8, you'll be able to quickly and accurately detect objects in real time, streamline your workflows, and achieve new levels of accuracy in your projects. Check out our YOLOv8 Docs for details and get started with:

pip install ultralytics
I am trying to create val_loader with the same parameters as train_loader and to remove the restriction that only the master process creates val_loader. Next, I removed the constraint that validate.run() is executed only by the master process, and I removed the tqdm from validate.run() so that the progress bars of different processes don't interfere with each other and print too much output. However, with these changes each process only produces validation results for its own shard instead of for the complete validation set, so I have to combine the partial results from the different GPU processes to get a complete validation result. I don't know if there is anything wrong with this approach; if so, I would ask the author to point it out.
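A minimal sketch of what I mean by combining the partial results might look like the following. It assumes torch.distributed is already initialized by DDP training; the `gather_stats` helper and the `stats` list (per-image tuples like those collected inside validate.run()) are hypothetical illustrations, not YOLOv5 code:

```python
import torch.distributed as dist

def gather_stats(stats, world_size):
    """Collect each rank's partial validation stats so rank 0 can compute
    metrics (mAP, etc.) over the full validation set."""
    if world_size <= 1:
        return stats
    gathered = [None] * world_size           # one slot per process
    dist.all_gather_object(gathered, stats)  # every rank contributes its own list
    merged = []
    for part in gathered:                    # flatten into one list, as if a single GPU saw everything
        merged.extend(part)
    return merged
```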
I think I already know what I need to do: during training the dataset is split equally across the n GPU processes, but only the progress of the master (process 0) is displayed, disguised as the overall progress with pbar = tqdm(total=nb). I could likewise disguise the partial validation progress of process 0 as the total progress using pbar = tqdm(total=nb), but I would have to rewrite the mAP calculation and the other downstream steps so that they work across multiple processes.
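A rough sketch of the rank-0-only progress bar pattern I'm describing is below. The helper and its arguments are hypothetical, and nb is approximated as len(val_loader) * world_size:

```python
from tqdm import tqdm

def validate_shard(val_loader, rank, world_size, process_batch):
    """Each rank iterates over its own shard; only rank 0 draws a global progress bar."""
    nb = len(val_loader) * world_size                 # approximate total batches across all GPUs
    pbar = tqdm(total=nb) if rank == 0 else None      # progress bar only on the master process
    for batch in val_loader:
        process_batch(batch)                          # user-supplied per-batch validation step
        if pbar is not None:
            pbar.update(world_size)                   # advance as if all ranks progressed together
    if pbar is not None:
        pbar.close()
```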
Hello, thank you for your detailed observations and for sharing your approach to addressing the validation phase's performance on multi-GPU setups. Your insights are valuable and show a deep understanding of the underlying processes. Indeed, the validation phase in YOLOv5 currently runs on a single GPU, which can become a bottleneck, especially with large validation sets. Your idea of distributing the validation workload across multiple GPUs is a promising approach to mitigating this issue. Here are a few points to consider and some suggestions to help you refine your implementation:

For further details on multi-GPU training and validation, you can refer to the Multi-GPU Training Tutorial. Thank you again for your contributions and for pushing the boundaries of what's possible with YOLOv5. If you have any more questions or need further assistance, feel free to ask!
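As a hedged illustration (not the code YOLOv5 ships), one common PyTorch pattern for sharding validation across GPUs is to wrap the validation dataset in a DistributedSampler and all-reduce scalar metrics afterwards. The function names and dataset are placeholders:

```python
import torch
import torch.distributed as dist
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

def build_val_loader(val_dataset, batch_size, rank, world_size):
    # Each rank receives a non-overlapping shard of the validation set.
    sampler = DistributedSampler(val_dataset, num_replicas=world_size, rank=rank, shuffle=False)
    return DataLoader(val_dataset, batch_size=batch_size, sampler=sampler,
                      num_workers=4, pin_memory=True)

def reduce_scalar(value, device):
    """Average a scalar metric (e.g. a loss component) across all processes."""
    t = torch.tensor(value, dtype=torch.float32, device=device)
    dist.all_reduce(t, op=dist.ReduceOp.SUM)
    return (t / dist.get_world_size()).item()
```

Note that mAP is not a simple average, so per-image statistics would still need to be gathered (for example with dist.all_gather_object) before computing it on one rank.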
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help. For additional resources and information, please see the links below:
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed! Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
Search before asking
Question
Hello, dear author. I observed that validation is very slow and uses only one GPU, regardless of how many GPUs are available. Here's a question I'd like to ask from a novice's perspective: why not make the validation part multi-GPU parallel as well? Is it impossible, unnecessary, or have you simply not had time to do it? Since I have recently been looking for a way to reduce the validation time, I was wondering if there is an existing solution that could save me some effort. If not, I am going to try multi-GPU parallel validation, just like multi-GPU training. Would this work? Please forgive me if I have caused any offence. A toy illustration of the behaviour I am seeing follows below.
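This toy stand-in (not the actual YOLOv5 source) shows the gating pattern I believe is responsible: under DDP, only the master process runs validation, so adding GPUs does not speed it up:

```python
import os

# -1 for single-GPU/CPU runs, 0..N-1 when launched with torchrun
RANK = int(os.getenv("RANK", -1))

def run_validation():
    print(f"rank {RANK}: running the full validation set on one device")

if RANK in (-1, 0):   # only the master process validates; the other ranks skip this block
    run_validation()
```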
Additional
No response