Add trace endpoints. Add trace log frequency (triton-inference-server#3849)

* Add overview of the trace API and new trace setting

* Address comment

* Address comment. Document model / version specification detail

* Remove version specification from documentation

* Update Tracer interface for trace protocol. Add HTTP trace endpoint

* Add test

* Add GRPC trace endpoint. Add trace to supported extensions

* Clean up

* Update trace clear logic

* Update trace lifecycle management

* Add trace count

* Update trace count logic

* Update API naming

* Address comment

* Update trace count to log traces when the count is met

* Add test with Python client library
GuanLuo authored Feb 8, 2022
1 parent 6618b7c commit 4b23fe7
Showing 14 changed files with 2,551 additions and 356 deletions.
1 change: 1 addition & 0 deletions docs/protocol/README.md
@@ -42,6 +42,7 @@ plus several extensions that are defined in the following documents:
- [Sequence extension](./extension_sequence.md)
- [Shared-memory extension](./extension_shared_memory.md)
- [Statistics extension](./extension_statistics.md)
- [Trace extension](./extension_trace.md)

For the GRPC protocol the [protobuf
specification](https://github.com/triton-inference-server/common/blob/main/protobuf/grpc_service.proto)
197 changes: 197 additions & 0 deletions docs/protocol/extension_trace.md
@@ -0,0 +1,197 @@
<!--
# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of NVIDIA CORPORATION nor the names of its
# contributors may be used to endorse or promote products derived
# from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
# OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-->

# Trace Extension

This document describes Triton's trace extension. The trace extension enables
the client to configure the trace settings during a Triton run. Because this
extension is supported, Triton reports “trace” in the extensions field of
its Server Metadata.

## HTTP/REST

In all JSON schemas shown in this document $number, $string, $boolean,
$object and $array refer to the fundamental JSON types. #optional
indicates an optional JSON field.

Triton exposes the trace endpoint at the following URL. The client may use an
HTTP GET request to retrieve the current trace setting. An HTTP POST request
modifies the trace setting, and the endpoint returns the updated trace
setting on success or an error in the case of failure. An optional model name
can be provided to get or set the trace settings for a specific model.

```
GET v2[/models/${MODEL_NAME}]/trace/setting
POST v2[/models/${MODEL_NAME}]/trace/setting
```
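
For example, with a model named `simple` (the name is illustrative only), the
global and per-model settings would be reached at:

```
GET v2/trace/setting
GET v2/models/simple/trace/setting
```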

### Trace Setting Response JSON Object

A successful trace setting request is indicated by a 200 HTTP status
code. The response object, identified as $trace_setting_response, is
returned in the HTTP body for every successful trace setting request.

```
$trace_setting_response =
{
  $trace_setting, ...
}

$trace_setting = $string : $string | [ $string, ...]
```

Each $trace_setting JSON describes a “name”/“value” pair, where the “name” is
the name of the trace setting and the “value” is a $string representation of the
setting value, or an array of $string for some settings. Currently, the following
trace settings are defined (a usage sketch follows the list):

- "trace_file" : the file where the trace output will be saved. If
"log frequency" is set, this will be the prefix of the files to save the
trace output, resulting files in name "${trace_file}.0", "${trace_file}.1"...,
see trace setting "log frequency" below for detail.
- "trace_level" : the trace level. "OFF" to disable tracing,
TIMESTAMPS" to trace timestamps, "TENSORS" to trace tensors.
This value is an array of string whhere user may specify multiple levels to
trace multiple informations.
- "trace_rate" : the trace sampling rate. The value represents how many requests
will one trace be sampled from. For example, if the trace rate is "1000",
1 trace will be sampled for every 1000 requests.
- "trace_count" : the number of remaining traces to be sampled. Once the value
becomes "0", no more traces will be sampled for the trace setting, and the
collected traces will be written to indexed trace file in the format described
in "log_frequency", regardless of the "log_frequencey" status.
If the value is "-1", the number of traces to be sampled will not be limited.
- "log_frequency" : the frequency that Triton will log the
trace output to the files. If the value is "0", Triton will only log
the trace output to ${trace_file} when shutting down. Otherwise, Triton will log
the trace output to ${trace_file}.${idx} when it collects
the specified number of traces. For example, if the log frequency is "100",
when Triton collects the 100-th trace, it logs the traces to file
"${trace_file}.0", and when it collects the 200-th trace, it logs the 101-th to
the 200-th traces to file "${trace_file}.1". Note that the file index will be
reset to 0 when "trace_file" setting is updated.
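
As a usage sketch, the snippet below retrieves the current global trace settings
over HTTP/REST with the third-party `requests` package, assuming a server
listening on `localhost:8000` (Triton's default HTTP port); the setting values
shown in the comment are illustrative only.

```
import requests

# Query the global trace settings; insert "models/<model name>/" before
# "trace/setting" to query the settings of a specific model instead.
resp = requests.get("http://localhost:8000/v2/trace/setting")
resp.raise_for_status()

settings = resp.json()
# Each entry is a "name"/"value" pair, for example:
# {"trace_file": "trace.json", "trace_level": ["TIMESTAMPS"],
#  "trace_rate": "1000", "trace_count": "-1", "log_frequency": "0"}
for name, value in settings.items():
    print(f"{name}: {value}")
```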


### Trace Setting Response JSON Error Object

A failed trace setting request will be indicated by an HTTP error status
(typically 400). The HTTP body must contain the
$trace_setting_error_response object.

```
$trace_setting_error_response =
{
  "error": $string
}
```

- “error” : The descriptive message for the error.

### Trace Setting Request JSON Object

A trace setting request is made with an HTTP POST to
the trace endpoint. In the corresponding response the HTTP body contains the
response JSON. A successful request is indicated by a 200 HTTP status code.

The request object, identified as $trace_setting_request, must be provided in the HTTP
body.

```
$trace_setting_request =
{
  $trace_setting, ...
}
```

The $trace_setting JSON is defined in
[Trace Setting Response JSON Object](#trace-setting-response-json-object); only the specified
settings will be updated. In addition to the values mentioned in the response JSON
object, a JSON null value may be used to remove the specification of
the trace setting. In that case, the current global setting will be used.
Similarly, if this is the first request to initialize a model's trace settings,
the current global setting will be used for any trace settings that are not
specified in the request.
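
As an illustrative sketch (not part of the protocol definition), the request
below uses the third-party `requests` package against a server assumed to be
listening on `localhost:8000` to update two settings of a model named `simple`
and to clear its model-specific `trace_rate` so that the global value applies:

```
import requests

# Only the listed settings are updated; "trace_rate": None is sent as a JSON
# null, which removes the model-specific value so the global setting is used.
new_settings = {
    "trace_level": ["TIMESTAMPS"],
    "log_frequency": "50",
    "trace_rate": None,
}

resp = requests.post(
    "http://localhost:8000/v2/models/simple/trace/setting",
    json=new_settings,
)
resp.raise_for_status()

# On success the body holds the updated trace settings for the model.
print(resp.json())
```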

## GRPC

For the trace extension Triton implements the following API:

```
service GRPCInferenceService
{
  // Update and get the trace setting of the Triton server.
  rpc TraceSetting(TraceSettingRequest)
      returns (TraceSettingResponse) {}
}
```

The Trace Setting API returns the latest trace settings. Errors are indicated
by the google.rpc.Status returned for the request. The OK code
indicates success and other codes indicate failure. The request and
response messages for Trace Setting are:

```
message TraceSettingRequest
{
  // The values to be associated with a trace setting.
  // If no value is provided, the setting will be cleared and
  // the global setting value will be used.
  message SettingValue
  {
    repeated string value = 1;
  }

  // The new setting values to be updated;
  // settings that are not specified will remain unchanged.
  map<string, SettingValue> settings = 1;

  // The name of the model to which the new trace settings are applied.
  // If not given, the new settings will be applied globally.
  string model_name = 2;
}

message TraceSettingResponse
{
  message SettingValue
  {
    repeated string value = 1;
  }

  // The latest trace settings.
  map<string, SettingValue> settings = 1;
}
```

The trace settings are described in
[Trace Setting Response JSON Object](#trace-setting-response-json-object).
Note that if this is the first request to initialize
a model's trace settings, the value of any trace setting not specified
in the request will be copied from the current global settings.
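
The commit also adds a test that exercises this API through the Python client
library. As a minimal sketch in that spirit, the example below assumes (rather
than guarantees) that `tritonclient.grpc` exposes `update_trace_settings` and
`get_trace_settings` helpers wrapping the TraceSetting RPC, and that the server
is listening on `localhost:8001`, Triton's default GRPC port:

```
import tritonclient.grpc as grpcclient

# Assumed helper names wrapping the TraceSetting RPC.
client = grpcclient.InferenceServerClient(url="localhost:8001")

# Update two trace settings of model "simple"; settings not listed here keep
# their current values (copied from the global settings on first use).
client.update_trace_settings(
    model_name="simple",
    settings={"trace_level": ["TIMESTAMPS"], "log_frequency": "50"},
)

# Read back the effective settings for the model.
print(client.get_trace_settings(model_name="simple"))
```
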
7 changes: 6 additions & 1 deletion docs/trace.md
@@ -1,5 +1,5 @@
<!--
# Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
# Copyright 2019-2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
@@ -42,6 +42,11 @@ this example every 100-th inference request will be traced. The
--trace-level option indicates the level of trace detail that should
be collected. Use the --help option to get more information.

In addition to configuring trace settings with command-line arguments, the user may
modify the trace settings while the Triton server
is running via the trace APIs; more information can be found in the [trace
protocol](protocol/extension_trace.md).

## JSON Trace Output

The trace output is a JSON file with the following schema.
111 changes: 86 additions & 25 deletions qa/L0_cmdline_trace/test.sh
@@ -1,5 +1,5 @@
#!/bin/bash
# Copyright (c) 2019-2022, NVIDIA CORPORATION. All rights reserved.
# Copyright 2019-2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
@@ -294,6 +294,68 @@ fi

set -e

# trace-rate == 6, trace-level=TIMESTAMPS, trace-log-frequency == 2
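# With trace-rate 6 the 20 requests below (10 HTTP + 10 GRPC) should yield 3
# sampled traces; log-frequency 2 flushes the first 2 traces to
# trace_frequency.log.0, and the remaining trace is written to
# trace_frequency.log.1 at shutdown, which the checks after the run verify.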
SERVER_ARGS="--http-thread-count=1 --trace-file=trace_frequency.log \
--trace-level=TIMESTAMPS --trace-rate=6 \
--trace-log-frequency=2 --model-repository=$MODELSDIR"
SERVER_LOG="./inference_server_frequency.log"
run_server
if [ "$SERVER_PID" == "0" ]; then
echo -e "\n***\n*** Failed to start $SERVER\n***"
cat $SERVER_LOG
exit 1
fi

set +e

for p in {1..10}; do
    $SIMPLE_HTTP_CLIENT >> client_frequency.log 2>&1
    if [ $? -ne 0 ]; then
        RET=1
    fi

    $SIMPLE_GRPC_CLIENT >> client_frequency.log 2>&1
    if [ $? -ne 0 ]; then
        RET=1
    fi
done

set -e

kill $SERVER_PID
wait $SERVER_PID

set +e

# Two trace files
$TRACE_SUMMARY -t trace_frequency.log.0 > summary_frequency.log.0
if [ `grep -c "COMPUTE_INPUT_END" summary_frequency.log.0` != "2" ]; then
cat summary_frequency.log.0
echo -e "\n***\n*** Test Failed\n***"
RET=1
fi

if [ `grep -c ^simple summary_frequency.log.0` != "2" ]; then
cat summary_frequency.log.0
echo -e "\n***\n*** Test Failed\n***"
RET=1
fi

$TRACE_SUMMARY -t trace_frequency.log.1 > summary_frequency.log.1
if [ `grep -c "COMPUTE_INPUT_END" summary_frequency.log.1` != "1" ]; then
cat summary_frequency.log.1
echo -e "\n***\n*** Test Failed\n***"
RET=1
fi

if [ `grep -c ^simple summary_frequency.log.1` != "1" ]; then
cat summary_frequency.log.1
echo -e "\n***\n*** Test Failed\n***"
RET=1
fi

set -e

# trace-rate == 9, trace-level=TIMESTAMPS
SERVER_ARGS="--http-thread-count=1 --trace-file=trace_9.log \
--trace-level=TIMESTAMPS --trace-rate=9 --model-repository=$MODELSDIR"
@@ -397,18 +459,17 @@ if [ `grep -c "COMPUTE_INPUT_END" summary_ensemble.log` != "7" ]; then
echo -e "Ensemble trace log expects 7 compute"
RET=1
fi
# For GRPC frontend, its handlers will occupy one trace ID on creation
GRPC_ID_OFFSET=3

for trace_str in \
"{\"id\":$((GRPC_ID_OFFSET+1)),\"model_name\":\"simple\",\"model_version\":1}" \
"{\"id\":$((GRPC_ID_OFFSET+2)),\"model_name\":\"nop_TYPE_INT32_-1\",\"model_version\":1,\"parent_id\":$((GRPC_ID_OFFSET+1))}" \
"{\"id\":$((GRPC_ID_OFFSET+3)),\"model_name\":\"fan_${MODELBASE}\",\"model_version\":1,\"parent_id\":$((GRPC_ID_OFFSET+1))}" \
"{\"id\":$((GRPC_ID_OFFSET+4)),\"model_name\":\"nop_TYPE_INT32_-1\",\"model_version\":1,\"parent_id\":$((GRPC_ID_OFFSET+3))}" \
"{\"id\":$((GRPC_ID_OFFSET+5)),\"model_name\":\"${MODELBASE}\",\"model_version\":1,\"parent_id\":$((GRPC_ID_OFFSET+3))}" \
"{\"id\":$((GRPC_ID_OFFSET+6)),\"model_name\":\"nop_TYPE_INT32_-1\",\"model_version\":1,\"parent_id\":$((GRPC_ID_OFFSET+3))}" \
"{\"id\":$((GRPC_ID_OFFSET+7)),\"model_name\":\"nop_TYPE_INT32_-1\",\"model_version\":1,\"parent_id\":$((GRPC_ID_OFFSET+3))}" \
"{\"id\":$((GRPC_ID_OFFSET+8)),\"model_name\":\"nop_TYPE_INT32_-1\",\"model_version\":1,\"parent_id\":$((GRPC_ID_OFFSET+1))}" \
"{\"id\":$((GRPC_ID_OFFSET+9)),\"model_name\":\"nop_TYPE_INT32_-1\",\"model_version\":1,\"parent_id\":$((GRPC_ID_OFFSET+1))}" ; do
"{\"id\":1,\"model_name\":\"simple\",\"model_version\":1}" \
"{\"id\":2,\"model_name\":\"nop_TYPE_INT32_-1\",\"model_version\":1,\"parent_id\":1}" \
"{\"id\":3,\"model_name\":\"fan_${MODELBASE}\",\"model_version\":1,\"parent_id\":1}" \
"{\"id\":4,\"model_name\":\"nop_TYPE_INT32_-1\",\"model_version\":1,\"parent_id\":3}" \
"{\"id\":5,\"model_name\":\"${MODELBASE}\",\"model_version\":1,\"parent_id\":3}" \
"{\"id\":6,\"model_name\":\"nop_TYPE_INT32_-1\",\"model_version\":1,\"parent_id\":3}" \
"{\"id\":7,\"model_name\":\"nop_TYPE_INT32_-1\",\"model_version\":1,\"parent_id\":3}" \
"{\"id\":8,\"model_name\":\"nop_TYPE_INT32_-1\",\"model_version\":1,\"parent_id\":1}" \
"{\"id\":9,\"model_name\":\"nop_TYPE_INT32_-1\",\"model_version\":1,\"parent_id\":1}" ; do
    if [ `grep -c ${trace_str} trace_ensemble.log` != "1" ]; then
        echo -e "Ensemble trace log expects trace: ${trace_str}"
        RET=1
@@ -464,15 +525,15 @@ if [ `grep -c "COMPUTE_INPUT_END" summary_ensemble_tensor.log` != "7" ]; then
RET=1
fi
for trace_str in \
"{\"id\":$((GRPC_ID_OFFSET+1)),\"model_name\":\"simple\",\"model_version\":1}" \
"{\"id\":$((GRPC_ID_OFFSET+2)),\"model_name\":\"nop_TYPE_INT32_-1\",\"model_version\":1,\"parent_id\":$((GRPC_ID_OFFSET+1))}" \
"{\"id\":$((GRPC_ID_OFFSET+3)),\"model_name\":\"fan_${MODELBASE}\",\"model_version\":1,\"parent_id\":$((GRPC_ID_OFFSET+1))}" \
"{\"id\":$((GRPC_ID_OFFSET+4)),\"model_name\":\"nop_TYPE_INT32_-1\",\"model_version\":1,\"parent_id\":$((GRPC_ID_OFFSET+3))}" \
"{\"id\":$((GRPC_ID_OFFSET+5)),\"model_name\":\"${MODELBASE}\",\"model_version\":1,\"parent_id\":$((GRPC_ID_OFFSET+3))}" \
"{\"id\":$((GRPC_ID_OFFSET+6)),\"model_name\":\"nop_TYPE_INT32_-1\",\"model_version\":1,\"parent_id\":$((GRPC_ID_OFFSET+3))}" \
"{\"id\":$((GRPC_ID_OFFSET+7)),\"model_name\":\"nop_TYPE_INT32_-1\",\"model_version\":1,\"parent_id\":$((GRPC_ID_OFFSET+3))}" \
"{\"id\":$((GRPC_ID_OFFSET+8)),\"model_name\":\"nop_TYPE_INT32_-1\",\"model_version\":1,\"parent_id\":$((GRPC_ID_OFFSET+1))}" \
"{\"id\":$((GRPC_ID_OFFSET+9)),\"model_name\":\"nop_TYPE_INT32_-1\",\"model_version\":1,\"parent_id\":$((GRPC_ID_OFFSET+1))}" ; do
"{\"id\":1,\"model_name\":\"simple\",\"model_version\":1}" \
"{\"id\":2,\"model_name\":\"nop_TYPE_INT32_-1\",\"model_version\":1,\"parent_id\":1}" \
"{\"id\":3,\"model_name\":\"fan_${MODELBASE}\",\"model_version\":1,\"parent_id\":1}" \
"{\"id\":4,\"model_name\":\"nop_TYPE_INT32_-1\",\"model_version\":1,\"parent_id\":3}" \
"{\"id\":5,\"model_name\":\"${MODELBASE}\",\"model_version\":1,\"parent_id\":3}" \
"{\"id\":6,\"model_name\":\"nop_TYPE_INT32_-1\",\"model_version\":1,\"parent_id\":3}" \
"{\"id\":7,\"model_name\":\"nop_TYPE_INT32_-1\",\"model_version\":1,\"parent_id\":3}" \
"{\"id\":8,\"model_name\":\"nop_TYPE_INT32_-1\",\"model_version\":1,\"parent_id\":1}" \
"{\"id\":9,\"model_name\":\"nop_TYPE_INT32_-1\",\"model_version\":1,\"parent_id\":1}" ; do
    if [ `grep -c ${trace_str} trace_ensemble_tensor.log` != "1" ]; then
        echo -e "Ensemble trace tensors log expects trace: ${trace_str}"
        RET=1
@@ -496,10 +557,10 @@ if [ `grep -o TENSOR_BACKEND_OUTPUT trace_ensemble_tensor.log | wc -l` != "14" ]
fi

for trace_str in \
"{\"id\":$((GRPC_ID_OFFSET+1)),\"activity\":\"TENSOR_QUEUE_INPUT\",\"tensor\":{\"name\":\"INPUT0\",\"data\":\"0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15\",\"shape\":\"1,16\",\"dtype\":\"INT32\"}}" \
"{\"id\":$((GRPC_ID_OFFSET+1)),\"activity\":\"TENSOR_QUEUE_INPUT\",\"tensor\":{\"name\":\"INPUT1\",\"data\":\"1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\",\"shape\":\"1,16\",\"dtype\":\"INT32\"}}" \
"{\"id\":$((GRPC_ID_OFFSET+1)),\"activity\":\"TENSOR_BACKEND_OUTPUT\",\"tensor\":{\"name\":\"OUTPUT0\",\"data\":\"1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16\",\"shape\":\"1,16\",\"dtype\":\"INT32\"}}" \
"{\"id\":$((GRPC_ID_OFFSET+1)),\"activity\":\"TENSOR_BACKEND_OUTPUT\",\"tensor\":{\"name\":\"OUTPUT1\",\"data\":\"-1,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14\",\"shape\":\"1,16\",\"dtype\":\"INT32\"}}" ; do
"{\"id\":1,\"activity\":\"TENSOR_QUEUE_INPUT\",\"tensor\":{\"name\":\"INPUT0\",\"data\":\"0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15\",\"shape\":\"1,16\",\"dtype\":\"INT32\"}}" \
"{\"id\":1,\"activity\":\"TENSOR_QUEUE_INPUT\",\"tensor\":{\"name\":\"INPUT1\",\"data\":\"1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\",\"shape\":\"1,16\",\"dtype\":\"INT32\"}}" \
"{\"id\":1,\"activity\":\"TENSOR_BACKEND_OUTPUT\",\"tensor\":{\"name\":\"OUTPUT0\",\"data\":\"1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16\",\"shape\":\"1,16\",\"dtype\":\"INT32\"}}" \
"{\"id\":1,\"activity\":\"TENSOR_BACKEND_OUTPUT\",\"tensor\":{\"name\":\"OUTPUT1\",\"data\":\"-1,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14\",\"shape\":\"1,16\",\"dtype\":\"INT32\"}}" ; do
    if [ `grep -c ${trace_str} trace_ensemble_tensor.log` != "1" ]; then
        echo -e "Ensemble trace tensors log expects trace: ${trace_str}"
        RET=1