Commit
Fix broken links pointing to the grpc_server.cc file (triton-infere…
matemijolovic authored Jul 17, 2023
1 parent a8f122d commit 1e805ae
Showing 2 changed files with 3 additions and 3 deletions.
4 changes: 2 additions & 2 deletions docs/customization_guide/inference_protocols.md
@@ -185,7 +185,7 @@ All capabilities of Triton server are encapsulated in the shared
library and are exposed via the Server API. The `tritonserver`
executable implements HTTP/REST and GRPC endpoints and uses the Server
API to communicate with core Triton logic. The primary source files
-for the endpoints are [grpc_server.cc](https://github.com/triton-inference-server/server/blob/main/src/grpc_server.cc) and
+for the endpoints are [grpc_server.cc](https://github.com/triton-inference-server/server/blob/main/src/grpc/grpc_server.cc) and
[http_server.cc](https://github.com/triton-inference-server/server/blob/main/src/http_server.cc). In these source files you can
see the Server API being used.

@@ -376,7 +376,7 @@ A simple example using the C API can be found in
found in the source that implements the HTTP/REST and GRPC endpoints
for Triton. These endpoints use the C API to communicate with the core
of Triton. The primary source files for the endpoints are
-[grpc_server.cc](https://github.com/triton-inference-server/server/blob/main/src/grpc_server.cc) and
+[grpc_server.cc](https://github.com/triton-inference-server/server/blob/main/src/grpc/grpc_server.cc) and
[http_server.cc](https://github.com/triton-inference-server/server/blob/main/src/http_server.cc).

## Java bindings for In-Process Triton Server API
2 changes: 1 addition & 1 deletion docs/user_guide/decoupled_models.md
@@ -93,7 +93,7 @@ how the gRPC streaming can be used to infer decoupled models.
If using [Triton's in-process C API](../customization_guide/inference_protocols.md#in-process-triton-server-api),
your application should be cognizant that the callback function you registered with
`TRITONSERVER_InferenceRequestSetResponseCallback` can be invoked any number of times,
-each time with a new response. You can take a look at [grpc_server.cc](https://github.com/triton-inference-server/server/blob/main/src/grpc_server.cc)
+each time with a new response. You can take a look at [grpc_server.cc](https://github.com/triton-inference-server/server/blob/main/src/grpc/grpc_server.cc)

### Knowing When a Decoupled Inference Request is Complete

