KarlKeat/PGxQA

PGx-LLM-Eval

Running a local Llama via vLLM

```shell
# Starts an OpenAI-compatible API server on port 8000
python -m vllm.entrypoints.openai.api_server --model meta-llama/Meta-Llama-3-8B-Instruct
```
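Once the server is up, it can be queried with any OpenAI-compatible client. Below is a minimal sketch using only the Python standard library, assuming the default port 8000 and the model name from the command above. The actual network call is commented out so the snippet runs without a live server; the example question is illustrative, not taken from this repository's benchmark.

```python
import json
import urllib.request

# vLLM's OpenAI-compatible chat endpoint (assumed default host/port)
URL = "http://localhost:8000/v1/chat/completions"


def build_request(prompt: str) -> dict:
    """Build a chat-completions payload for the served model."""
    return {
        "model": "meta-llama/Meta-Llama-3-8B-Instruct",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.0,  # deterministic output for evaluation runs
    }


payload = build_request("Which enzyme is encoded by the CYP2D6 gene?")

# Uncomment to send the request against a running server:
# req = urllib.request.Request(
#     URL,
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the endpoint follows the OpenAI schema, the official `openai` Python client can be pointed at it as well by setting its base URL to `http://localhost:8000/v1`.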
