
Hi, I'm Kuntai Du 👋

PhD from UChicago, now working as Chief Scientist at TensorMesh, Inc.

Expertise: LLM inference, especially KV-cache-related optimizations.

Check my home page for more about me!

🔧 Experience

  • 🚀 Working on the vLLM project as a vLLM core team member and committer.
  • 💾 Contributing to the LMCache project, exploring fun ideas in KV caches.

🎮 Hobbies and Interests

  • 🎮 Gaming: League of Legends, Stardew Valley, Go
  • 💃 Street Dance: Locking main, but I also dance waacking.
  • 🎤 Singing: Loch Lomond and 传奇 (Legend)

📧 Contact
