
Add onnx and python models back to L0_infer_valgrind test #5502

Merged: 10 commits into main on Mar 15, 2023

Conversation

krishung5 (Contributor) commented on Mar 14, 2023

Python models: do not cause the server to hang; they only hit an OOM issue when all models are loaded at once.
Onnx models: loading all the models takes about 12 hours. At some point the server gets stuck for a while when this line is called, but it eventually resumes. Reducing the instance count and the number of models being loaded helps in this case.
Hence, the test is modified to run the python and onnx models in separate runs.
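A minimal sketch of the per-backend split described above. The function and variable names (`run_backend_test`, `TEST_BACKENDS`) are illustrative only; the real qa/L0_infer/test.sh uses its own conventions and would launch tritonserver under valgrind with just that backend's models.

```shell
#!/bin/bash
# Hypothetical sketch: run the valgrind-instrumented infer test once per
# backend instead of loading every model into a single server instance,
# which avoids the OOM (python) and long-load/stall (onnx) issues.

run_backend_test() {
    local backend=$1
    # In the real test this step would start tritonserver under valgrind
    # with only the models for this backend; here we just report the split.
    echo "valgrind run: backend=${backend}"
}

TEST_BACKENDS="python onnx"
for backend in ${TEST_BACKENDS}; do
    run_backend_test "${backend}"
done
```

Running each backend in its own server lifetime also keeps any leaks valgrind reports attributable to a single backend.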

Review threads on qa/L0_infer/test.sh (resolved)
krishung5 merged commit b79c3b8 into main on Mar 15, 2023
krishung5 deleted the krish-server-hang branch on March 16, 2023