[RFC]: Enhancing LoRA Management for Production Environments in vLLM #6275

@Jeffwan

Description

This RFC proposes improvements to the management of Low-Rank Adaptation (LoRA) in vLLM to make it more suitable for production environments. This proposal aims to address several pain points observed in the current implementation. Feedback and discussions are welcome, and we hope to gather input and refine the proposal based on community insights.

Motivation.


LoRA integration in production environments faces several challenges that need to be addressed to ensure smooth and efficient deployment and management. The main issues observed include:

  1. Visibility of LoRA Information: Currently, the relationship between LoRA and base models is not exposed clearly by the engine. The /v1/models endpoint does not display this information. Related issues: [Feature]: Expose Lora lineage information from /v1/models #6274

  2. Dynamic Loading and Unloading: LoRA adapters cannot be dynamically loaded or unloaded after the server has started. Related issues: Multi-LoRA - Support for providing /load and /unload API #3308 [Feature]: Allow LoRA adapters to be specified as in-memory dict of tensors #4068 [Feature]: load/unload API to run multiple LLMs in a single GPU instance #5491

  3. Remote Registry Support: LoRA adapters cannot be pulled from remote model repositories during runtime, making it cumbersome to manage artifacts locally. Related issues: [Feature]: Support loading lora adapters from HuggingFace in runtime #6233 [Bug]: relative path doesn't work for Lora adapter model #6231

  4. Observability: There is a lack of metrics and observability enhancements related to LoRA, making it difficult to monitor and manage.

  5. Cluster-level Support: Information about LoRA is not easily accessible to resource managers, hindering support for service discovery, load balancing, and scheduling in cluster environments. Related issues: [RFC]: Add control panel support for vLLM #4873

Proposed Change.

1. Support Dynamically Loading or Unloading LoRA Adapters

To enhance flexibility and manageability, we propose introducing the ability to dynamically load and unload LoRA adapters at runtime.

  • Expose /v1/add_adapter and /v1/remove_adapter endpoints in api_server.py.
  • Introduce lazy and eager loading modes for LoRA adapters to provide more flexibility in deployment strategies. In lazy mode, we can simply attach the LoRA to a LoRARequest; in eager mode, the engine should load the LoRA explicitly via the lora_manager.
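A minimal sketch of the lazy/eager semantics described above. The `LoRARegistry` class, its method names, and the `loaded` flag are all illustrative assumptions, not vLLM's actual API; a real implementation would live behind the proposed /v1/add_adapter and /v1/remove_adapter endpoints.

```python
# Hypothetical sketch of dynamic adapter registration with lazy vs. eager
# loading; names are illustrative, not vLLM's actual internals.
from dataclasses import dataclass, field


@dataclass
class LoRARegistry:
    """Tracks LoRA adapters registered at runtime.

    In "eager" mode the engine loads adapter weights immediately on
    registration; in "lazy" mode loading is deferred until the first
    request references the adapter (simulated here with a `loaded` flag).
    """
    mode: str = "lazy"
    adapters: dict = field(default_factory=dict)

    def add_adapter(self, name: str, path: str) -> None:
        loaded = self.mode == "eager"   # eager: load weights now
        self.adapters[name] = {"path": path, "loaded": loaded}

    def remove_adapter(self, name: str) -> None:
        self.adapters.pop(name, None)   # unload and free the slot

    def ensure_loaded(self, name: str) -> None:
        # Called on the request path: lazy adapters load on first use.
        self.adapters[name]["loaded"] = True


registry = LoRARegistry(mode="lazy")
registry.add_adapter("sql-lora", "/models/sql-lora")
registry.ensure_loaded("sql-lora")      # first request triggers the load
registry.remove_adapter("sql-lora")
```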

2. Load LoRA Adapters from Remote Storage

Enabling LoRA adapters to be loaded from remote storage at runtime will simplify artifact management and deployment. One technical approach is to add a get_adapter_absolute_path helper that:

  • Expands relative paths into absolute paths.
  • Downloads Hugging Face models at runtime and returns the local snapshot path.
  • Refactors the LoRA path reference from lora_local_path to local_path.
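The helper could look roughly like the sketch below. This is an assumption about the shape of get_adapter_absolute_path, not its actual implementation; the Hugging Face fallback relies on the optional huggingface_hub dependency.

```python
# Hedged sketch of a possible get_adapter_absolute_path helper.
import os


def get_adapter_absolute_path(lora_path: str) -> str:
    """Resolve a LoRA path: expand ~, make relative paths absolute, and
    fall back to downloading a Hugging Face repo snapshot when the path
    does not exist locally."""
    expanded = os.path.abspath(os.path.expanduser(lora_path))
    if os.path.exists(expanded):
        return expanded
    # Not a local path: treat the string as a Hugging Face repo id.
    from huggingface_hub import snapshot_download
    return snapshot_download(repo_id=lora_path)
```

Callers would then pass the resolved path to the LoRA manager as the adapter's local_path.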

3. Build Better LoRA Model Lineage

To improve the visibility and management of LoRA models, we propose building more robust model lineage metadata, so that, for example, the /v1/models endpoint can expose the relationship between each LoRA adapter and its base model.
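One possible shape for the lineage metadata, sketched below under the assumption of a hypothetical "parent" field on /v1/models entries; the actual response schema is open for discussion.

```python
# Illustrative sketch of how /v1/models could expose LoRA lineage.
def build_model_list(base_model: str, lora_adapters: dict) -> dict:
    """Return an OpenAI-style model list where each LoRA entry records
    the base model it was trained against."""
    entries = [{"id": base_model, "object": "model", "parent": None}]
    for name, path in lora_adapters.items():
        entries.append({
            "id": name,
            "object": "model",
            "parent": base_model,   # lineage: the base model this LoRA adapts
            "root": path,           # where the adapter weights live
        })
    return {"object": "list", "data": entries}
```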

4. LoRA Observability Enhancement

Improving observability by adding metrics specific to LoRA will help in better monitoring and management. Proposed metrics include:

  • Loading and unloading times for LoRA adapters.
  • Memory and compute resource usage by LoRA adapters.
  • Performance impact on base models when using LoRA adapters.
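The first metric above could be collected roughly as sketched here. The metric names and the dict-based recorder are illustrative placeholders; a production setup would more likely export these through vLLM's existing Prometheus metrics.

```python
# Minimal sketch of recording LoRA adapter load times; names are assumptions.
import time
from collections import defaultdict

METRICS = defaultdict(list)


def record(metric: str, value: float) -> None:
    """Append an observation to the named metric series."""
    METRICS[metric].append(value)


def timed_load(adapter_name: str, load_fn) -> None:
    """Wrap an adapter load and record how long it took."""
    start = time.monotonic()
    load_fn()
    record("lora_load_seconds", time.monotonic() - start)
```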

5. Control Plane Support (Service Discovery, Load Balancing, Scheduling) for LoRAs

Since the vLLM community focuses mainly on the inference engine, the cluster-level features will be covered in a separate design I am working on in the Kubernetes WG-Serving. I will link it back to this issue shortly.

PR List

Feedback Period.

No response

CC List.

@simon-mo @Yard1

Note: Please help tag the right person who worked in this area.

Any Other Things.

No response
