Inconsistent pod name with parallelization enabled #10237
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. If this is a mentoring request, please provide an update here. Thank you for your contributions.
bump
Pre-requisites
Tested with the :latest image tag.
What happened/what you expected to happen?
I am noticing inconsistent behavior in pod name calculation when using parallelization with the v1 POD_NAME spec. I have parallelization configured to run at most 5 workflows at a time, and I submit 20 workflows simultaneously. The first 5 calculate pod names for all tasks without the template name (per the v1 spec), but every workflow that was initially stuck in the Pending state appends the template name to the pod name as well (v2 spec). The Argo UI doesn't show this, but if you monitor the pods with kubectl, you can see that pods created after the initial 5 workflows have the template name appended (v2 spec).
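For context, the difference being described (the workflow and template names below are hypothetical): with v1 pod naming, a task pod is named from the workflow name plus a node hash, e.g. `my-wf-1234567890`; with v2 naming the template name is inserted as well, e.g. `my-wf-my-task-1234567890`. One way to observe the names as the controller creates pods:

```shell
# Watch pod names as they are created; assumes workflows run in the
# "argo" namespace. Per the report, pods created after the first 5
# workflows carry the extra template-name segment.
kubectl get pods -n argo --watch -o name
```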
Version
3.4.0
Paste a small workflow that reproduces the issue. We must be able to run the workflow; don't enter a workflow that uses private images.
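No workflow was attached. A minimal reproduction sketch, assuming a controller with its parallelism limit set to 5 and POD_NAMES=v1 configured (both assumptions; adjust to your setup):

```shell
# Submit 20 copies of a trivial workflow so that 15 of them queue in
# Pending behind the 5-workflow parallelism limit, then compare the pod
# names of the first 5 against the rest.
for i in $(seq 1 20); do
  kubectl create -n argo -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: pod-name-repro-
spec:
  entrypoint: main
  templates:
    - name: main
      container:
        image: alpine:3.16
        command: [sh, -c, "sleep 60"]
EOF
done
```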
Logs from the workflow controller
Logs from your workflow's wait container
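Neither log section was filled in. The collection commands suggested by the issue template (assuming the `argo` namespace; `${workflow}` is a placeholder for the affected workflow name):

```shell
# Workflow controller logs, filtered to the affected workflow.
kubectl logs -n argo deploy/workflow-controller | grep ${workflow}

# Wait-container logs from the workflow's pods.
kubectl logs -n argo -c wait -l workflows.argoproj.io/workflow=${workflow},workflow.argoproj.io/phase!=Succeeded
```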