Usage Stats Collection #2852

Merged
merged 62 commits on Mar 29, 2024

Commits (62 total; the diff below shows changes from 40 commits)
e0e1386
Write info to local json
yhu422 Feb 8, 2024
739f4a1
Merge branch 'main' of github.com:vllm-project/vllm into usage
yhu422 Feb 8, 2024
c33b4cc
add usage context
yhu422 Feb 9, 2024
b74e3a6
removed usage_context from Engine_args
yhu422 Feb 9, 2024
c988e07
Move IO to another process
yhu422 Feb 9, 2024
88c5187
added http request
yhu422 Feb 13, 2024
85adbab
Merge branch 'main' of github.com:vllm-project/vllm into usage
yhu422 Feb 13, 2024
33c9dff
Added additional arg for from_engine_args
yhu422 Feb 13, 2024
ad609f0
comments
yhu422 Feb 13, 2024
8a2f18a
Write info to local json
yhu422 Feb 8, 2024
8e9e5be
Merge branch 'usage' of https://github.com/yhu422/vllm into usage
yhu422 Feb 13, 2024
f537692
Added Comments
yhu422 Feb 13, 2024
0f1ba7f
.
yhu422 Feb 13, 2024
abc3948
Collect usage info on engine initialization
yhu422 Feb 8, 2024
ec54145
Merge branch 'usage' of https://github.com/yhu422/vllm into usage
yhu422 Feb 13, 2024
f84ccaa
Write usage to local file for testing
yhu422 Feb 13, 2024
b08ba86
Fixed Formatting
yhu422 Feb 13, 2024
83ff459
Merge branch 'vllm-project:main' into usage
yhu422 Feb 13, 2024
73b689a
formatting changes
yhu422 Feb 13, 2024
86da72f
Merge branch 'usage' of https://github.com/yhu422/vllm into usage
yhu422 Feb 13, 2024
9c9a188
Minor bug fixed
yhu422 Feb 13, 2024
d2f84cf
tmp
yhu422 Feb 13, 2024
4e888e0
Merge branch 'main' of github.com:vllm-project/vllm into usage
yhu422 Feb 13, 2024
eb48061
Fixed Bug
yhu422 Feb 14, 2024
0684c06
Add Google Cloud Run service URL
yhu422 Feb 14, 2024
8e9890e
More GPU CPU Mem info
yhu422 Feb 16, 2024
5cf652a
Merge branch 'main' of github.com:vllm-project/vllm into usage
yhu422 Feb 27, 2024
d910b05
Added context constant
yhu422 Feb 27, 2024
8cf264b
Formatting & CPU Info
yhu422 Feb 27, 2024
93b8773
Update vllm/usage/usage_lib.py
yhu422 Feb 27, 2024
fe39b84
Added CPU info, new stat file path
yhu422 Feb 27, 2024
fc6e374
added gpu memory
yhu422 Feb 27, 2024
ab23171
added memory
yhu422 Feb 28, 2024
686c84a
Distinguish production/testing usage, added custom domain
yhu422 Mar 1, 2024
877eb78
formatting
yhu422 Mar 1, 2024
bc89a66
Merge branch 'main' of github.com:vllm-project/vllm into usage
yhu422 Mar 1, 2024
36fd304
Merge branch 'main' of github.com:vllm-project/vllm into usage
yhu422 Mar 5, 2024
e54f15b
test/prod distinction
yhu422 Mar 5, 2024
4e35b3b
Remove cpuinfo import
yhu422 Mar 5, 2024
a1597fb
ruff
yhu422 Mar 5, 2024
c580797
Merge branch 'main' of github.com:vllm-project/vllm into usage
yhu422 Mar 14, 2024
84353d4
fixed merge
yhu422 Mar 14, 2024
f2e69fc
Pass up model architecture info for GPUExecutor
yhu422 Mar 14, 2024
4e19967
formatting
yhu422 Mar 14, 2024
f327f3c
formatting
yhu422 Mar 14, 2024
d9c8a44
Get architecture directly from configs
yhu422 Mar 14, 2024
59f0f10
Merge branch 'main' of github.com:vllm-project/vllm into usage
simon-mo Mar 16, 2024
f34259a
edits round
simon-mo Mar 17, 2024
30df77c
ruff
simon-mo Mar 17, 2024
60b652b
Merge branch 'main' of github.com:vllm-project/vllm into usage
simon-mo Mar 28, 2024
be91bab
fix format
simon-mo Mar 28, 2024
4f04743
finish all code level functionality
simon-mo Mar 28, 2024
f4bf862
add wip doc
simon-mo Mar 28, 2024
6b968db
Merge branch 'main' of github.com:vllm-project/vllm into usage
simon-mo Mar 28, 2024
2006788
revert some fixes
simon-mo Mar 28, 2024
db715c8
more fixes
simon-mo Mar 28, 2024
2c1e557
finish doc, readability pass
simon-mo Mar 28, 2024
42e66b8
edit pass
simon-mo Mar 28, 2024
a4e5742
fix doc and isort
simon-mo Mar 28, 2024
9652830
Merge branch 'main' of github.com:vllm-project/vllm into usage
simon-mo Mar 29, 2024
ba63b44
bad merge
simon-mo Mar 29, 2024
58fb78d
add to amd req txt
simon-mo Mar 29, 2024
2 changes: 2 additions & 0 deletions .buildkite/test-template.j2
@@ -45,6 +45,8 @@ steps:
nvidia.com/gpu: "{{ step.num_gpus or default_num_gpu }}"
{% endif %}
env:
- name: VLLM_USAGE_SOURCE
value: ci-test
- name: HF_TOKEN
valueFrom:
secretKeyRef:
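The CI template sets VLLM_USAGE_SOURCE so that test traffic can be told apart from real deployments; as a minimal sketch of the assumed lookup (mirroring the default in usage_lib.py further down), an unset variable means "production":

    import os

    # Mirrors the lookup in UsageMessage._report_usage below; CI sets this to "ci-test".
    source = os.environ.get("VLLM_USAGE_SOURCE", "production")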
2 changes: 2 additions & 0 deletions requirements.txt
@@ -4,6 +4,8 @@ ray >= 2.9
sentencepiece # Required for LLaMA tokenizer.
numpy
torch == 2.1.2
requests
psutil
transformers >= 4.38.0 # Required for Gemma.
xformers == 0.0.23.post1 # Required for CUDA 12.1.
fastapi
13 changes: 9 additions & 4 deletions vllm/engine/async_llm_engine.py
@@ -13,6 +13,7 @@
from vllm.logger import init_logger
from vllm.outputs import RequestOutput
from vllm.sampling_params import SamplingParams
from vllm.usage.usage_lib import UsageContext

logger = init_logger(__name__)
ENGINE_ITERATION_TIMEOUT_S = int(
@@ -666,9 +667,12 @@ async def get_model_config(self) -> ModelConfig:
return self.engine.get_model_config()

@classmethod
def from_engine_args(cls,
engine_args: AsyncEngineArgs,
start_engine_loop: bool = True) -> "AsyncLLMEngine":
def from_engine_args(
cls,
engine_args: AsyncEngineArgs,
start_engine_loop: bool = True,
usage_context: UsageContext = UsageContext.ENGINE_CONTEXT
) -> "AsyncLLMEngine":
"""Creates an async LLM engine from the engine arguments."""
# Create the engine configs.
engine_configs = engine_args.create_engine_configs()
@@ -684,7 +688,8 @@ def from_engine_args(cls,
log_requests=not engine_args.disable_log_requests,
log_stats=not engine_args.disable_log_stats,
max_log_len=engine_args.max_log_len,
start_engine_loop=start_engine_loop)
start_engine_loop=start_engine_loop,
usage_context=usage_context)
return engine

async def do_log_stats(self) -> None:
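For reference, a minimal sketch of how an entrypoint can pass the new keyword through from_engine_args (mirroring the api_server changes below); the model name here is illustrative:

    from vllm.engine.arg_utils import AsyncEngineArgs
    from vllm.engine.async_llm_engine import AsyncLLMEngine
    from vllm.usage.usage_lib import UsageContext

    # usage_context defaults to ENGINE_CONTEXT when not given.
    engine_args = AsyncEngineArgs(model="facebook/opt-125m")
    engine = AsyncLLMEngine.from_engine_args(
        engine_args, usage_context=UsageContext.API_SERVER)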
36 changes: 23 additions & 13 deletions vllm/engine/llm_engine.py
@@ -24,7 +24,7 @@
TokenizerGroup)
from vllm.utils import (Counter, set_cuda_visible_devices, get_ip,
get_open_port, get_distributed_init_method)

from vllm.usage.usage_lib import UsageContext, is_usage_stats_enabled, usage_message
if ray:
from ray.util.scheduling_strategies import PlacementGroupSchedulingStrategy

@@ -72,19 +72,20 @@ class LLMEngine:
placement_group: Ray placement group for distributed execution.
Required for distributed execution.
log_stats: Whether to log statistics.
usage_context: Specified entry point, used for usage info collection
"""

def __init__(
self,
model_config: ModelConfig,
cache_config: CacheConfig,
parallel_config: ParallelConfig,
scheduler_config: SchedulerConfig,
device_config: DeviceConfig,
lora_config: Optional[LoRAConfig],
placement_group: Optional["PlacementGroup"],
log_stats: bool,
) -> None:
self,
model_config: ModelConfig,
cache_config: CacheConfig,
parallel_config: ParallelConfig,
scheduler_config: SchedulerConfig,
device_config: DeviceConfig,
lora_config: Optional[LoRAConfig],
placement_group: Optional["PlacementGroup"],
log_stats: bool,
usage_context: UsageContext = UsageContext.ENGINE_CONTEXT) -> None:
logger.info(
f"Initializing an LLM engine (v{vllm.__version__}) with config: "
f"model={model_config.model!r}, "
@@ -118,6 +119,10 @@ def __init__(
self._init_tokenizer()
self.seq_counter = Counter()

#If usage stat is enabled, collect relevant info.
if is_usage_stats_enabled():
usage_message.report_usage(model_config.model, usage_context)

# Create the parallel GPU workers.
if self.parallel_config.worker_use_ray:
# Disable Ray usage stats collection.
@@ -394,7 +399,11 @@ def _init_cache(self) -> None:
self._run_workers("warm_up_model")

@classmethod
def from_engine_args(cls, engine_args: EngineArgs) -> "LLMEngine":
def from_engine_args(
cls,
engine_args: EngineArgs,
usage_context: UsageContext = UsageContext.ENGINE_CONTEXT
) -> "LLMEngine":
"""Creates an LLM engine from the engine arguments."""
# Create the engine configs.
engine_configs = engine_args.create_engine_configs()
@@ -404,7 +413,8 @@ def from_engine_args(cls, engine_args: EngineArgs) -> "LLMEngine":
# Create the LLM engine.
engine = cls(*engine_configs,
placement_group,
log_stats=not engine_args.disable_log_stats)
log_stats=not engine_args.disable_log_stats,
usage_context=usage_context)
return engine

def encode_request(
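In plain terms, the reporting gate added to LLMEngine.__init__ above boils down to the following sketch (the model name is illustrative):

    from vllm.usage.usage_lib import (UsageContext, is_usage_stats_enabled,
                                      usage_message)

    # Report once at engine initialization, and only if the user has not opted out.
    if is_usage_stats_enabled():
        usage_message.report_usage("facebook/opt-125m", UsageContext.ENGINE_CONTEXT)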
5 changes: 3 additions & 2 deletions vllm/entrypoints/api_server.py
@@ -16,6 +16,7 @@
from vllm.engine.async_llm_engine import AsyncLLMEngine
from vllm.sampling_params import SamplingParams
from vllm.utils import random_uuid
from vllm.usage.usage_lib import UsageContext

TIMEOUT_KEEP_ALIVE = 5 # seconds.
app = FastAPI()
@@ -87,9 +88,9 @@ async def stream_results() -> AsyncGenerator[bytes, None]:
help="FastAPI root_path when app is behind a path based routing proxy")
parser = AsyncEngineArgs.add_cli_args(parser)
args = parser.parse_args()

engine_args = AsyncEngineArgs.from_cli_args(args)
engine = AsyncLLMEngine.from_engine_args(engine_args)
engine = AsyncLLMEngine.from_engine_args(
engine_args, usage_context=UsageContext.API_SERVER)

app.root_path = args.root_path
uvicorn.run(app,
4 changes: 3 additions & 1 deletion vllm/entrypoints/llm.py
@@ -9,6 +9,7 @@
from vllm.outputs import RequestOutput
from vllm.sampling_params import SamplingParams
from vllm.utils import Counter
from vllm.usage.usage_lib import UsageContext


class LLM:
@@ -106,7 +107,8 @@ def __init__(
disable_custom_all_reduce=disable_custom_all_reduce,
**kwargs,
)
self.llm_engine = LLMEngine.from_engine_args(engine_args)
self.llm_engine = LLMEngine.from_engine_args(
engine_args, usage_context=UsageContext.LLM_CLASS)
self.request_counter = Counter()

def get_tokenizer(
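For the offline path nothing changes for callers: constructing an LLM now reports under the LLM_CLASS context whenever stats are enabled. A minimal sketch (model name illustrative):

    from vllm import LLM

    # Engine construction inside LLM triggers report_usage(...) with UsageContext.LLM_CLASS.
    llm = LLM(model="facebook/opt-125m")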
5 changes: 3 additions & 2 deletions vllm/entrypoints/openai/api_server.py
@@ -23,6 +23,7 @@
from vllm.entrypoints.openai.serving_chat import OpenAIServingChat
from vllm.entrypoints.openai.serving_completion import OpenAIServingCompletion
from vllm.entrypoints.openai.serving_engine import LoRA
from vllm.usage.usage_lib import UsageContext

TIMEOUT_KEEP_ALIVE = 5 # seconds

@@ -245,9 +246,9 @@ async def authentication(request: Request, call_next):
served_model = args.served_model_name
else:
served_model = args.model

engine_args = AsyncEngineArgs.from_cli_args(args)
engine = AsyncLLMEngine.from_engine_args(engine_args)
engine = AsyncLLMEngine.from_engine_args(
engine_args, usage_context=UsageContext.OPENAI_API_SERVER)
openai_serving_chat = OpenAIServingChat(engine, served_model,
args.response_role,
args.lora_modules,
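Since both servers now report on startup, users who want to opt out can set either environment variable before the engine is constructed (see is_usage_stats_enabled in usage_lib.py below); a minimal sketch:

    import os

    # Either variable disables collection; an empty ~/.config/vllm/do_not_track
    # file has the same effect. The check happens at engine initialization.
    os.environ["VLLM_NO_USAGE_STATS"] = "1"  # or: os.environ["DO_NOT_TRACK"] = "1"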
Empty file added vllm/usage/__init__.py
Empty file.
139 changes: 139 additions & 0 deletions vllm/usage/usage_lib.py
@@ -0,0 +1,139 @@
import os
import torch
import json
import platform
import pkg_resources
import requests
import datetime
import psutil
from threading import Thread
from pathlib import Path
from typing import Optional
from enum import Enum

_xdg_config_home = os.getenv('XDG_CONFIG_HOME',
os.path.expanduser('~/.config'))
_vllm_internal_path = 'vllm/usage_stats.json'

_USAGE_STATS_FILE = os.path.join(
_xdg_config_home,
_vllm_internal_path) #File path to store usage data locally
_USAGE_STATS_ENABLED = None
_USAGE_STATS_SERVER = os.environ.get('VLLM_USAGE_STATS_SERVER',
'https://stats.vllm.ai')


def is_usage_stats_enabled():
"""Determine whether or not we can send usage stats to the server.
The logic is as follows:
- By default, it should be enabled.
- Two environment variables can disable it:
- DO_NOT_TRACK=1
- VLLM_NO_USAGE_STATS=1
- A file in the home directory can disable it if it exists:
- $HOME/.config/vllm/do_not_track
"""
global _USAGE_STATS_ENABLED
if _USAGE_STATS_ENABLED is None:
do_not_track = os.environ.get('DO_NOT_TRACK', '0') == '1'
no_usage_stats = os.environ.get('VLLM_NO_USAGE_STATS', '0') == '1'
do_not_track_file = os.path.exists(
os.path.expanduser('~/.config/vllm/do_not_track'))

_USAGE_STATS_ENABLED = not (do_not_track or no_usage_stats
or do_not_track_file)
return _USAGE_STATS_ENABLED


def _get_current_timestamp_ns() -> int:
return int(datetime.datetime.now(datetime.timezone.utc).timestamp() * 1e9)


def _detect_cloud_provider() -> str:
# Try detecting through vendor file
vendor_files = [
'/sys/class/dmi/id/product_version', '/sys/class/dmi/id/bios_vendor',
'/sys/class/dmi/id/product_name',
'/sys/class/dmi/id/chassis_asset_tag', '/sys/class/dmi/id/sys_vendor'
]
# Mapping of identifiable strings to cloud providers
cloud_identifiers = {
'amazon': "AWS",
'microsoft corporation': "AZURE",
'google': "GCP",
'oraclecloud': "OCI",
}

for vendor_file in vendor_files:
path = Path(vendor_file)
if path.is_file():
file_content = path.read_text().lower()
for identifier, provider in cloud_identifiers.items():
if identifier in file_content:
return provider
return "UNKNOWN"


class UsageContext(Enum):
UNKNOWN_CONTEXT = "UNKNOWN_CONTEXT"
LLM_CLASS = "LLM_CLASS"
API_SERVER = "API_SERVER"
OPENAI_API_SERVER = "OPENAI_API_SERVER"
ENGINE_CONTEXT = "ENGINE_CONTEXT"


class UsageMessage:

def __init__(self) -> None:
self.gpu_list: Optional[dict] = None
self.provider: Optional[str] = None
self.architecture: Optional[str] = None
self.platform: Optional[str] = None
self.model: Optional[str] = None
self.vllm_version: Optional[str] = None
self.context: Optional[str] = None
self.log_time: Optional[int] = None
#Logical CPU count
self.num_cpu: Optional[int] = None
self.cpu_type: Optional[str] = None
self.total_memory: Optional[int] = None
self.source: Optional[str] = None

def report_usage(self, model: str, context: UsageContext) -> None:
t = Thread(target=usage_message._report_usage, args=(model, context))
t.start()

def _report_usage(self, model: str, context: UsageContext) -> None:
self.context = context.value
self.gpu_list = []
for i in range(torch.cuda.device_count()):
device_property = torch.cuda.get_device_properties(i)
gpu_name = device_property.name
gpu_memory = device_property.total_memory
self.gpu_list.append({"name": gpu_name, "memory": gpu_memory})
self.provider = _detect_cloud_provider()
self.architecture = platform.machine()
self.platform = platform.platform()
self.vllm_version = pkg_resources.get_distribution("vllm").version
self.model = model
Collaborator
I wonder if it makes more sense to get the model architecture vs. the model name?

E.g. it's probably more useful to know it's the Llama architecture with size X than the name string, for tracking purposes. Otherwise you'll have to do this on the backend, and it may not be recoverable for local models.

Collaborator
+100
self.log_time = _get_current_timestamp_ns()
self.num_cpu = os.cpu_count()
Collaborator
It would be good to get the type of CPU as well, such as its product name, so you can be aware of what ISA extensions are available for performance.

Collaborator
+1!

#Best effort reading processor name
self.cpu_type = platform.processor()
self.total_memory = psutil.virtual_memory().total
Member (@ywang96, Mar 6, 2024)
Just a heads up: if the model server is deployed as a Linux Docker container, then most metrics from os and psutil will reflect the overall capacity of the machine instead of the actual resource share allocated to the container. This has been an issue for a long time - see python/cpython#80235.

Collaborator
Yeah, I know there's a way around this, but it's a bit too complex. I'm just going to assume most folks who are running LLM prod will adopt a one-pod-per-VM approach.

self.source = os.environ.get("VLLM_USAGE_SOURCE", "production")
self._write_to_file()
headers = {'Content-type': 'application/x-ndjson'}
payload = json.dumps(vars(self))
try:
requests.post(_USAGE_STATS_SERVER, data=payload, headers=headers)
except requests.exceptions.RequestException:
print("Usage Log Request Failed")

def _write_to_file(self):
os.makedirs(os.path.dirname(_USAGE_STATS_FILE), exist_ok=True)
with open(_USAGE_STATS_FILE, "w+") as outfile:
json.dump(vars(self), outfile)


usage_message = UsageMessage()
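To see what actually gets sent, the same payload is also written to the local stats file; a quick way to inspect it, with the path reconstructed from _USAGE_STATS_FILE above:

    import json
    import os

    stats_file = os.path.join(
        os.getenv("XDG_CONFIG_HOME", os.path.expanduser("~/.config")),
        "vllm/usage_stats.json")
    if os.path.exists(stats_file):
        with open(stats_file) as f:
            print(json.dumps(json.load(f), indent=2))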