
return BadRequest for all invalid inputs #4291

Open
lvhan028 wants to merge 3 commits into InternLM:main from lvhan028:fix-protocol-check

Conversation

@lvhan028
Collaborator

No description provided.

Copilot AI review requested due to automatic review settings on January 26, 2026 09:54
Contributor

Copilot AI left a comment


Pull request overview

Adjusts OpenAI-compatible API validation to return 400 Bad Request for invalid inputs, and expands request parameter validation.

Changes:

  • Add validation for min_p range in generate/completions/chat-completions request checks.
  • Add validation for non-positive max_tokens / max_completion_tokens (and min_new_tokens for chat).
  • Change FastAPI RequestValidationError handling from HTTP 422 to HTTP 400; refactor OpenAI protocol model typing.

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 1 comment.

Changed files:

  • lmdeploy/serve/openai/serving_generate.py: Validate min_p range for /generate inputs.
  • lmdeploy/serve/openai/serving_completion.py: Validate max_tokens/max_completion_tokens positivity and min_p range for /v1/completions.
  • lmdeploy/serve/openai/serving_chat_completion.py: Validate max_tokens/max_completion_tokens/min_new_tokens and min_p range for /v1/chat/completions.
  • lmdeploy/serve/openai/protocol.py: Refactor protocol type annotations; narrow chat messages typing.
  • lmdeploy/serve/openai/api_server.py: Return HTTP 400 (instead of 422) for request validation errors.


