- 👋 Hi, I’m @minosfuture
- 👀 I’m interested in LLM inference, Linux, GPUs, Rust, and system performance
- 🌱 I’m currently learning vLLM, model optimization, etc.
- 💞️ I’m looking to collaborate on OSS
- 📫 minos.future@gmail.com
- ⚡ Fun fact: my cat used to write code for me. I ask AI to debug it. AI says meow.
Pinned
- **cuda-ground-up**: A hands-on repository for learning and experimenting with GPU programming, CUDA kernel optimization, and model optimization techniques, built from the ground up with a teaching-focused approach. (CUDA)
- **vllm-project/vllm**: A high-throughput and memory-efficient inference and serving engine for LLMs.