Fix L0_custom_ops (triton-inference-server#4873)
tanmayv25 authored and mc-nv committed Oct 4, 2022
1 parent b72fe10 commit 03dda11
Showing 1 changed file with 4 additions and 1 deletion.
5 changes: 4 additions & 1 deletion qa/L0_custom_ops/test.sh
@@ -111,7 +111,10 @@ LD_LIBRARY_PATH=/opt/tritonserver/backends/pytorch:$LD_LIBRARY_PATH
 
 # Pytorch
 SERVER_ARGS="--model-repository=/data/inferenceserver/${REPO_VERSION}/qa_custom_ops/libtorch_custom_ops"
-SERVER_LD_PRELOAD="/data/inferenceserver/${REPO_VERSION}/qa_custom_ops/libtorch_custom_ops/libtorch_modulo/custom_modulo.so"
+# FIXME: Pre-load the system Python library to satisfy the symbol definitions,
+# as the custom op library is built with a different Python version within the
+# pytorch container. See DLIS-4152.
+SERVER_LD_PRELOAD="/usr/lib/x86_64-linux-gnu/libpython3.8.so.1:/data/inferenceserver/${REPO_VERSION}/qa_custom_ops/libtorch_custom_ops/libtorch_modulo/custom_modulo.so"
 run_server
 if [ "$SERVER_PID" == "0" ]; then
     echo -e "\n***\n*** Failed to start $SERVER\n***"
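The fix prepends the system Python library to the colon-separated LD_PRELOAD list, so the dynamic loader resolves the custom op library's Python symbols before the server loads it. A minimal sketch of how that preload list is assembled; the paths here are hypothetical stand-ins for the ${REPO_VERSION}-based paths used in the real test.sh:

```shell
#!/bin/bash
# Hypothetical paths standing in for the real test.sh values.
PYTHON_LIB="/usr/lib/x86_64-linux-gnu/libpython3.8.so.1"
CUSTOM_OP_LIB="/tmp/qa_custom_ops/libtorch_custom_ops/libtorch_modulo/custom_modulo.so"

# The dynamic loader searches LD_PRELOAD entries left to right, so listing
# the Python library first lets it satisfy any Python symbols that the
# custom op library leaves undefined.
SERVER_LD_PRELOAD="${PYTHON_LIB}:${CUSTOM_OP_LIB}"
echo "${SERVER_LD_PRELOAD}"
```

The same ordering matters at runtime: if the custom op library appeared first, its unresolved Python symbols could still fail to bind when the server dlopen()s it.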
