Commit f7430b7

serving transcription: fix type hints

Signed-off-by: Daniele Trifirò <dtrifiro@redhat.com>
1 parent fb8bfd1

1 file changed: +1 −1

vllm/entrypoints/openai/serving_transcription.py

Lines changed: 1 addition & 1 deletion
@@ -265,7 +265,7 @@ async def create_transcription(
             logger.exception("Error in preprocessing prompt inputs")
             return self.create_error_response(str(e))
 
-        result_generator: AsyncGenerator[RequestOutput, None] = None
+        result_generator: AsyncGenerator[RequestOutput, None] | None = None
         try:
             # TODO(rob): subtract len of tokenized prompt.
             default_max_tokens = self.model_config.max_model_len

0 commit comments