Add max_enqueued_batches option for model servers.
PiperOrigin-RevId: 639219039
deqiangc authored and tensorflow-copybara committed Jun 1, 2024
1 parent d914192 commit 67a2dcb
Showing 2 changed files with 0 additions and 5 deletions.
@@ -234,8 +234,6 @@ absl::Status TfrtSavedModelFactory::CreateTfrtSavedModelWithMetadata(
   compile_options.use_gpu_compile_and_execute_op =
       config_.tfrt_use_fused_gpu_op();
   compile_options.min_num_batch_threads = config_.tfrt_min_num_batch_threads();
-  compile_options.min_max_enqueued_batches =
-      config_.tfrt_min_max_enqueued_batches();
 
   options.graph_execution_options.run_placer_grappler_on_functions =
       config_.run_placer_grappler_on_functions();
@@ -208,9 +208,6 @@ message TfrtSavedModelConfig {
   // Whether to enable paging. This should only be true when using Pathways
   // backend.
   bool enable_paging = 2022;
-
-  // The minimum of the maximum number of outstanding enqueue batches
-  int64 tfrt_min_max_enqueued_batches = 2023;
 }
 
 // Config proto for TfrtSavedModelSourceAdapter.
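
For context on the fields this commit touches, below is a hedged C++ sketch, not code from this commit: it shows how the TfrtSavedModelConfig fields visible in the hunks above could be populated through the standard protoc-generated setters. Field names come from the diff; the values, the include path, and the namespace are illustrative assumptions.

// Hedged illustration only; not part of this commit. Field names are taken from
// the TfrtSavedModelConfig hunk above; the include path and namespace are
// assumptions that depend on the build setup.
// #include "path/to/tfrt_saved_model_config.pb.h"  // hypothetical path

void ConfigureTfrtSavedModel(tensorflow::serving::TfrtSavedModelConfig& config) {
  config.set_enable_paging(false);           // field 2022 in the proto hunk above
  config.set_tfrt_min_num_batch_threads(4);  // read via tfrt_min_num_batch_threads()
  // Before this commit, the config also carried a floor on the maximum number of
  // outstanding enqueued batches; its generated setter goes away along with the
  // removed field 2023:
  //   config.set_tfrt_min_max_enqueued_batches(64);
}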
