value in file_outputs is not being passed to input parameters correctly #957
I'm not sure why this is expected. Each argument is a separate array entry. There is no special magical logic inserting `=` between entries. It's generally not possible, and not desired, to join some arguments with `=`. So, the current behavior is by design (by design of Linux, Kubernetes, Argo, and KF Pipelines).

For example, `['--model_name', op1.output]` resolves to `['--model_name', 'inception']`.

Usually, programs do not really require the `--key=value` syntax. In your case, though, the program seems to require it. BTW, I see that you pass `['--port', 9000]` instead of `['--port=9000']`. If you pass `['--port', 9000]`, you get `['--port', 9000]`. If you need to pass `['--port=9000']`, then pass `['--port=9000']`.

Workaround: pass the flag and its value as a single `--key=value` string.
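To make the resolution behavior concrete, here is a minimal sketch (a simplified model for illustration only, not the actual KFP implementation; `resolve_args` and its placeholder syntax mirror the `{{inputs.parameters.*}}` form seen in the pipeline YAML below):

```python
# Simplified model of how container args are resolved: each list entry is
# substituted independently and stays a separate argv element. No '=' is
# ever inserted between entries.

def resolve_args(args, params):
    """Substitute {{inputs.parameters.*}} placeholders in each entry."""
    resolved = []
    for arg in args:
        for name, value in params.items():
            arg = str(arg).replace("{{inputs.parameters.%s}}" % name, str(value))
        resolved.append(arg)
    return resolved

params = {"retrain-output": "inception"}

# Separate entries stay separate -- no '=' is inserted between them:
print(resolve_args(["--model_name", "{{inputs.parameters.retrain-output}}"], params))
# -> ['--model_name', 'inception']

# A single pre-joined entry arrives joined, which is the workaround:
print(resolve_args(["--model_name={{inputs.parameters.retrain-output}}"], params))
# -> ['--model_name=inception']
```

The design choice here is deliberate: substitution never merges or splits array entries, so the author of the pipeline controls argv exactly.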
Thanks for looking into this @Ark-kun. I applied the changes/workaround you suggested in the pipeline definition. The resulting pipeline YAML:

```yaml
- container:
    args:
    - --port=9000
    - --model_name={{inputs.parameters.retrain-output}}
    - --model_base_path
    - s3://mybucket/models/myjob2/export/inception/
    command:
    - /usr/bin/tensorflow_model_server
```

And the resulting pod created by the pipeline:

```yaml
spec:
  containers:
  - args:
    - --port=9000
    - --model_name=inception
    - --model_base_path
    - s3://mybucket/models/myjob2/export/inception/
    command:
    - /usr/bin/tensorflow_model_server
```

But I'm still getting the same error from the serving pod:

```
usage: /usr/bin/tensorflow_model_server
Flags:
    --port=8500          int32   port to listen on
    --enable_batching=false  bool    enable batching
```
There might have been some mistake in the process. Can you please check it?

P.S. You should probably pass the feedback to the program author that they should use more standard command-line parsing libraries that handle both the `--key value` and `--key=value` forms.
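As an illustration of the "standard parsing" recommendation, Python's `argparse` is one such library that accepts both forms interchangeably:

```python
import argparse

# Standard parsers such as argparse accept both '--key value' and
# '--key=value', so callers are free to use either form.
parser = argparse.ArgumentParser()
parser.add_argument("--port", type=int)
parser.add_argument("--model_name")

# '--key value' form:
print(parser.parse_args(["--port", "9000", "--model_name", "inception"]))
# '--key=value' form:
print(parser.parse_args(["--port=9000", "--model_name=inception"]))
```

Both calls parse to the same values, which is exactly the flexibility the serving binary in this issue lacks.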
@Ark-kun You are right, and that worked! Thanks a lot for suggesting the changes. It might be useful to state that the pipeline requires images to use standard command-line parsing, or to have an example that passes in the param values with `=`. By the way, the code itself is just TensorFlow Serving, which requires us to pass the values this way: https://github.com/tensorflow/serving/blob/master/tensorflow_serving/model_servers/main.cc#L92
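To see why the `--key value` form fails for such a binary, here is a hypothetical sketch (in Python, not the actual C++ code from `main.cc`) of a parser that, like the flag parsing used there, only understands the `--flag=value` form; `parse_eq_only` and `known_flags` are names invented for this illustration:

```python
# Hypothetical model of an '='-only flag parser: tokens without '=' fall
# through unparsed, even when they look like a flag followed by its value.

def parse_eq_only(argv, known_flags):
    flags, leftover = {}, []
    for arg in argv:
        name, sep, value = arg.partition("=")
        if sep and name.startswith("--") and name[2:] in known_flags:
            flags[name[2:]] = value
        else:
            leftover.append(arg)  # unrecognized token
    return flags, leftover

known = {"port", "model_name", "model_base_path"}

# '--key=value' is understood:
print(parse_eq_only(["--model_name=inception"], known))
# '--key value' is not: both tokens fall through unparsed, which would
# trigger a usage message like the one the serving pod printed:
print(parse_eq_only(["--model_name", "inception"], known))
```

Under this model, `['--model_name', 'inception']` leaves both tokens unconsumed, matching the behavior reported in the issue.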
Good to hear that everything is working now.
KF Pipelines do not impose any requirements on the program or its command-line parsing. They do not change the command line in any way. If you use separate arguments, they will be passed separately. If you use a single `--key=value` string, it will be passed as a single argument as-is.
That's a good idea. We should do that.
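The "each array entry becomes exactly one argv element" behavior described above can be demonstrated locally with a child process that echoes its argv (a standalone sketch; the child script here just stands in for the container entrypoint):

```python
import json
import subprocess
import sys

# Spawn a child that prints its argv as JSON. Each list entry we pass
# becomes exactly one argv element; nothing joins or splits them.
child = subprocess.run(
    [sys.executable, "-c",
     "import json, sys; print(json.dumps(sys.argv[1:]))",
     "--port", "9000", "--model_name=inception"],
    capture_output=True, text=True, check=True,
)
print(child.stdout.strip())
# -> ["--port", "9000", "--model_name=inception"]
```

This mirrors how Kubernetes hands a container's `args` array to the entrypoint: verbatim, one argv element per entry.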
What happened:
The string value `inception` was written into a file (ritazh/k8s-ml@be7f520) and it is set as the file output: `file_outputs={'output': '/model_name.txt'}`. The file content is then passed in as an input parameter.

Expected:
The op2 pod will run with the argument `--model_name=inception`, as the file content written in `/model_name.txt` is passed into op2 from `op1.output`.

Actual:
`kubectl describe po`:

Job output:

Hack that works:

`kubectl describe po`:

The entire pipeline definition with the hack:
https://github.com/ritazh/pipelines/blob/fa2a3de2169ce9e8ceef36911ca5c6a4c2199d0d/samples/azure/retrain-pipeline.py#L87-L89