Fix L0_backend_python (triton-inference-server#4207)
* Fix L0_backend_python

* Review edit
Tabrizian committed Apr 14, 2022
1 parent 81fd197 commit f4e16b9
Showing 1 changed file with 4 additions and 1 deletion: qa/L0_backend_python/python_unittest.py
@@ -56,9 +56,12 @@ def test_python_unittest(self):
 
         if model_name == 'bls' or model_name == 'bls_memory' or model_name == 'bls_memory_async':
             # For these tests, the memory region size will be grown. Because of
-            # this we need to use the shared memory probe only on the second
+            # this we need to use the shared memory probe only on the later
             # call so that the probe can detect the leak correctly.
             self._run_unittest(model_name)
+
+            # [FIXME] See DLIS-3684
+            self._run_unittest(model_name)
             with self._shm_leak_detector.Probe() as shm_probe:
                 self._run_unittest(model_name)
         else:
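The change above assumes the leak detector's Probe() compares shared-memory usage on entering and leaving the `with` block, so any one-time growth of the memory region during the probed call would register as a false leak; running the unit test once (or, after this fix, twice) before the probe warms the region up to its final size. A minimal hypothetical sketch of such a probe (ShmLeakProbe and get_free_bytes are illustrative names, not Triton's actual API):

```python
# Hypothetical sketch of a shared-memory leak probe as a context manager.
# It records free shared memory on entry and asserts that the probed block
# did not permanently consume any of it on exit.

class ShmLeakProbe:
    def __init__(self, get_free_bytes):
        # get_free_bytes: callable returning the currently free shared
        # memory in bytes (illustrative; the real detector queries the
        # shared-memory region used by the Python backend).
        self._get_free_bytes = get_free_bytes

    def __enter__(self):
        self._before = self._get_free_bytes()
        return self

    def __exit__(self, exc_type, exc, tb):
        after = self._get_free_bytes()
        if exc_type is None and after < self._before:
            raise AssertionError(
                "possible leak: free shared memory dropped by "
                f"{self._before - after} bytes")
        return False  # never swallow exceptions from the block
```

Under this model, a warm-up call outside the probe absorbs the one-time region growth, and only the probed (later) call must leave free memory unchanged, which is exactly why the fix adds an extra `self._run_unittest(model_name)` before the `with` block.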
