
Add script to capture triton config #116

Merged
1 commit merged into nv-morpheus:branch-22.06 from pdmack:pdmack_triton-script
May 27, 2022

Conversation

pdmack
Copy link
Contributor

@pdmack pdmack commented May 19, 2022

Utility script to render a "starter" Triton config from "bare" ONNX, TensorFlow, or TensorRT models (models shipped without a config.pbtxt).

Implemented as a mix of bash, jq, sed, and awk; over time it could be replaced by Python with proper JSON and protobuf libraries. Makes a best-effort attempt to preserve the gpu/cpu accelerator and optional parameters fields if the model does in fact have its own config.pbtxt.

From the help:

This script retrieves a computed model configuration from NVIDIA Triton. The Triton
Inference Server can be launched with "--strict-model-config=false" which means it
will create a minimal Triton configuration (config.pbtxt) from the required
elements of a model if one has not been provided.

This only applies to TensorRT, TensorFlow saved-model, and ONNX models. Once the
initial configuration file is derived, it is expected that optional and advanced
settings will be applied by hand.

By default, the script will take the JSON output from Triton and reformat it
to stdout for usage as a new configuration (i.e., save stdout to config.pbtxt).

Developed and tested with the 22.04 release of Triton using Morpheus ONNX and FIL
sample models only.

For more information on Triton model configuration, please refer to the online docs:
https://github.com/triton-inference-server/server/blob/main/docs/model_configuration.md
https://github.com/triton-inference-server/common/blob/main/protobuf/model_config.proto

This script is intended for initial bootstrapping of a basic Triton model configuration.
Consult the Triton Model Navigator for advanced model optimization tooling.
https://github.com/triton-inference-server/model_navigator/blob/main/docs/optimize_for_triton.md
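The script itself is not shown in this conversation, but its core reformatting step can be sketched. The following is a minimal, hypothetical illustration (not the actual script): it assumes Triton was launched with `--strict-model-config=false` and its computed configuration was fetched as JSON from the HTTP endpoint `v2/models/<name>/config`, then uses jq to rewrite top-level scalar fields into protobuf text format. The sample JSON is invented for illustration; a real config contains nested input/output/instance_group messages that need further handling.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the JSON -> config.pbtxt reformatting step only.
# In practice the JSON would come from a running server, e.g.:
#   curl -s "localhost:8000/v2/models/${MODEL}/config" > /tmp/sample_config.json
# Here we use an inline sample so the sketch is self-contained.
cat <<'EOF' > /tmp/sample_config.json
{"name": "my_model", "platform": "onnxruntime_onnx", "max_batch_size": 8}
EOF

# Convert top-level scalar fields from JSON to protobuf text format:
# keys lose their quotes, strings stay quoted, numbers are bare,
# and JSON's comma-separated pairs become one "key: value" per line.
jq -r 'to_entries[] | "\(.key): \(.value | tojson)"' /tmp/sample_config.json
```

Running this prints `name: "my_model"`, `platform: "onnxruntime_onnx"`, and `max_batch_size: 8`, which is valid protobuf text for the scalar fields of a ModelConfig; saving stdout to config.pbtxt mirrors the workflow described in the help text above.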

@pdmack pdmack added the enhancement, improvement, non-breaking, and 3 - Ready for Review labels May 19, 2022
@pdmack pdmack requested a review from a team as a code owner May 23, 2022 21:57
@mdemoret-nv
Copy link
Contributor

@gpucibot merge

@ghost ghost merged commit 2710f93 into nv-morpheus:branch-22.06 May 27, 2022
@pdmack pdmack deleted the pdmack_triton-script branch June 6, 2022 13:33