
Python Client API support for shared memory #570

Merged Aug 29, 2019 · 26 commits

Changes from 1 commit
0d25464
Add Shared Memory Python client support
CoderHam Jul 23, 2019
62188c1
keep shared memory wrapper separate for now - will merge in future
CoderHam Aug 17, 2019
1d347a6
add shared memory helper functions to python API
CoderHam Aug 19, 2019
c4a13dd
fix docs
CoderHam Aug 20, 2019
ac9a5e6
move shared memory back out of regular client
CoderHam Aug 20, 2019
4802dba
modify shared memory python client with new API
CoderHam Aug 20, 2019
68061e6
WIP - read output_values into list of numpy arrays
CoderHam Aug 21, 2019
538b0fa
Python client integration
CoderHam Aug 21, 2019
28fd76b
fix set input shared memory
CoderHam Aug 21, 2019
6f3f426
support read output from shared memory
CoderHam Aug 21, 2019
41cc60c
- fix typos
CoderHam Aug 22, 2019
c0a772c
return result for shared memory like regular results
CoderHam Aug 22, 2019
146543b
migrate to using shared memory handle instead of base address
CoderHam Aug 23, 2019
c3bd8ac
Add python client to simple shared_memory test
CoderHam Aug 23, 2019
65d7819
move shared memory helpers to `tensorrtserver.shared_memory`
CoderHam Aug 23, 2019
fbcb727
fix segfault during shared memory handle creation
CoderHam Aug 23, 2019
ef46c11
move shared memory handle struct to a different header file
CoderHam Aug 24, 2019
71a2d8a
review edits
CoderHam Aug 26, 2019
9c3aef7
- use handle when possible - fix import for shared_memory (remove unn…
CoderHam Aug 27, 2019
f820b19
fix segfault due to empty handle
CoderHam Aug 28, 2019
4f84563
fix for windows build
CoderHam Aug 28, 2019
1ed4da3
use only handle to denote shared memory region - one shared memory re…
CoderHam Aug 28, 2019
98fc877
fix for batch_size != 1
CoderHam Aug 28, 2019
13919c0
remove dependency of shared_memory on request
CoderHam Aug 29, 2019
7a41f1a
review edits
CoderHam Aug 29, 2019
5d04920
fix raising exception
CoderHam Aug 29, 2019
modify shared memory python client with new API
CoderHam committed Aug 29, 2019
commit 4802dba75e2f6bc5bf90680299302b3fe17a70b8
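Several commits above (146543b, 1ed4da3) move the API from raw base addresses to an opaque shared-memory handle. The real handle is a C struct inside the client library; as a purely hypothetical illustration of the pattern — bundling name, key, offset, size, and mapping in one object — here is a stdlib-only sketch backed by a temp file rather than `shm_open()`:

```python
import mmap
import os
import tempfile
from dataclasses import dataclass


@dataclass
class SharedMemoryHandle:
    # Hypothetical fields; the actual struct lives in the client's C wrapper.
    name: str        # region name registered with the server, e.g. "input_data"
    shm_key: str     # system-wide key, e.g. "/input_simple"
    offset: int
    byte_size: int
    mem: mmap.mmap   # mapped view of the region


def create_handle(name, shm_key, byte_size):
    # Illustration only: a temp file stands in for POSIX shared memory.
    fd, path = tempfile.mkstemp()
    os.ftruncate(fd, byte_size)
    mem = mmap.mmap(fd, byte_size)
    os.close(fd)     # the mmap keeps its own reference to the file
    os.unlink(path)
    return SharedMemoryHandle(name, shm_key, 0, byte_size, mem)


h = create_handle("input_data", "/input_simple", 128)
h.mem[0:4] = (7).to_bytes(4, "little")          # write one int32 at offset 0
value = int.from_bytes(h.mem[0:4], "little")    # read it back: 7
h.mem.close()
```

Passing one such handle around, instead of a bare pointer plus separate bookkeeping, is what lets later commits drop the `map_shared_memory_region` step from user code.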
33 changes: 12 additions & 21 deletions src/clients/python/simple_shm_client.py
```diff
@@ -92,43 +92,34 @@
 shm_key = "/output_simple"
 shm_fd_op = shared_memory_ctx.create_shared_memory_region(shm_key, output_byte_size * 2)
 
-# TODO pythonize
-output0_shm = shared_memory_ctx.map_shared_memory_region(shm_fd_op, 0, output_byte_size * 2)
-output1_shm = (int*)(output0_shm + 16)
-
 # Register Output shared memory with TRTIS
 shared_memory_ctx.register("output_data", "/output_simple", 0, output_byte_size * 2)
 
 shm_key = "/input_simple"
-int shm_fd_ip = shared_memory_ctx.create_shared_memory_region(shm_key, input_byte_size * 2)
+shm_fd_ip = shared_memory_ctx.create_shared_memory_region(shm_key, input_byte_size * 2)
 
-# TODO pythonize
-input0_shm = shared_memory_ctx.map_shared_memory_region(shm_fd_ip, 0, input_byte_size * 2)
-input1_shm = (int*)(input0_shm + 16)
+# Put input data values into shared memory
+shared_memory_ctx.set_shared_memory_region_data(shm_fd_ip, 0, input0_data)
+shared_memory_ctx.set_shared_memory_region_data(shm_fd_ip, 0, input1_data)
 
 # Register Input shared memory with TRTIS
 shared_memory_ctx.register("input_data", "/input_simple", 0, input_byte_size * 2)
 
-# TODO put input data values into shared memory
-err = input0->SetSharedMemory("input_data", 0, input_byte_size)
-err = input1->SetSharedMemory("input_data", input_byte_size, input_byte_size)
-
 # Send inference request to the inference server. Get results for
 # both output tensors.
-result = infer_ctx.run({ 'INPUT0' : (input0_data,),
-                         'INPUT1' : (input1_data,) },
-                       { 'OUTPUT0' : InferContext.ResultFormat.RAW,
-                         'OUTPUT1' : InferContext.ResultFormat.RAW },
+result = infer_ctx.run({ 'INPUT0' : ("input_data", 0, input_byte_size),
+                         'INPUT1' : ("input_data", input_byte_size, input_byte_size), },
+                       { 'OUTPUT0' : (InferContext.ResultFormat.RAW, "output_data", 0, output_byte_size),
+                         'OUTPUT1' : (InferContext.ResultFormat.RAW, "output_data", output_byte_size, output_byte_size) },
                        batch_size)
 
+# Read output from shared memory ([TODO] Convert return buffer to numpy array of respective datatype)
+output0_data = shared_memory_ctx.read_shared_memory_region_data(shm_fd_op, 0, output_byte_size)
+output1_data = shared_memory_ctx.read_shared_memory_region_data(shm_fd_op, output_byte_size, output_byte_size)
+
 # We expect there to be 2 results (each with batch-size 1). Walk
 # over all 16 result elements and print the sum and difference
 # calculated by the model.
-
-# TODO read from shared memory
-output0_data = result['OUTPUT0'][0]
-output1_data = result['OUTPUT1'][0]
-
 for i in range(16):
     print(str(input0_data[i]) + " + " + str(input1_data[i]) + " = " + str(output0_data[i]))
     print(str(input0_data[i]) + " - " + str(input1_data[i]) + " = " + str(output1_data[i]))
```
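The diff's workflow — create a region, write input tensors at offsets, run, read output tensors back — builds on POSIX shared memory, which this PR wraps in `tensorrtserver.shared_memory`. As a rough, server-free illustration of the underlying mechanism only (not the PR's API), here is a sketch using Python's standard `multiprocessing.shared_memory` (3.8+), with a local sum and difference standing in for what the simple model's server would compute:

```python
import struct
from multiprocessing import shared_memory

ELEM, ITEMSIZE = 16, 4            # 16 INT32 values per tensor, as in the example
input_byte_size = ELEM * ITEMSIZE

# Create one region big enough for two tensors, mirroring
# create_shared_memory_region(shm_key, input_byte_size * 2).
region = shared_memory.SharedMemory(create=True, size=2 * input_byte_size)

# Write INPUT0 at offset 0 and INPUT1 at offset input_byte_size,
# analogous to set_shared_memory_region_data(handle, offset, data).
input0 = list(range(ELEM))
input1 = [1] * ELEM
region.buf[0:input_byte_size] = struct.pack(f"{ELEM}i", *input0)
region.buf[input_byte_size:2 * input_byte_size] = struct.pack(f"{ELEM}i", *input1)

# Read both tensors back out, as read_shared_memory_region_data would.
out0 = struct.unpack_from(f"{ELEM}i", region.buf, 0)
out1 = struct.unpack_from(f"{ELEM}i", region.buf, input_byte_size)
sums = [a + b for a, b in zip(out0, out1)]    # what OUTPUT0 would hold
diffs = [a - b for a, b in zip(out0, out1)]   # what OUTPUT1 would hold

region.close()
region.unlink()
```

The point of the indirection is that a server process attaching the same region by key sees the bytes without any copy over the wire; the `(region_name, offset, byte_size)` triples in the new `infer_ctx.run()` call tell it where inside the region each tensor lives.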