Stars
A tool to determine whether your PC can run a given LLM
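At its core, a check like this is mostly arithmetic: the weights need roughly parameter-count × bytes-per-parameter, plus some working overhead, and that estimate is compared against the machine's available memory. A minimal sketch of that idea (the function names, the 20% overhead factor, and the use of psutil are assumptions for illustration, not this tool's actual code):

```python
import psutil  # third-party: pip install psutil

def estimated_model_bytes(n_params: float, bytes_per_param: float,
                          overhead: float = 1.2) -> float:
    """Rough footprint: weights plus ~20% for KV cache and activations (assumed factor)."""
    return n_params * bytes_per_param * overhead

def can_run(n_params: float, bytes_per_param: float = 2.0) -> bool:
    """True if the model's estimated footprint fits in currently available RAM."""
    available = psutil.virtual_memory().available
    return estimated_model_bytes(n_params, bytes_per_param) <= available

# Example: a 7B-parameter model quantized to 4 bits (~0.5 bytes/param).
print(can_run(7e9, bytes_per_param=0.5))
```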
Replace 'hub' with 'ingest' in any GitHub URL to get a prompt-friendly extract of a codebase
Replace 'hub' with 'diagram' in any GitHub URL to instantly visualize the codebase as an interactive diagram
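Both entries above work the same way: a pure string rewrite of the repository URL, pointing it at gitingest.com or gitdiagram.com respectively, with no API involved. A minimal sketch (the helper names are made up for illustration):

```python
def to_ingest(repo_url: str) -> str:
    """github.com -> gitingest.com: prompt-friendly text extract of the repo."""
    return repo_url.replace("github.com", "gitingest.com", 1)

def to_diagram(repo_url: str) -> str:
    """github.com -> gitdiagram.com: interactive architecture diagram of the repo."""
    return repo_url.replace("github.com", "gitdiagram.com", 1)

url = "https://github.com/torvalds/linux"
print(to_ingest(url))   # https://gitingest.com/torvalds/linux
print(to_diagram(url))  # https://gitdiagram.com/torvalds/linux
```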
Generative models for conditional audio generation
Make any LLM think like OpenAI o1 and DeepSeek R1
Vaak is an AI-enabled dictation keyboard. In Punjabi, Vaak refers to utterance or speech.
A very simple, work-in-progress AOT compiler for x86-64 for learning purposes
EvaByte: Efficient Byte-level Language Models at Scale
Frontier Multimodal Foundation Models for Image and Video Understanding
A curated list of useful open-source AI resources
A GUI agent application based on UI-TARS (Vision-Language Model) that allows you to control your computer using natural language.
An AI-powered document organizer tool. It displays a small, cute robot on the screen. Give it any file and an optional short description; it will analyse the contents and description and save it on…
gpt_neox_client is a simple client for GPT-NeoX in Ruby
An LLM constructed with the C programming language.
🚀 LowMemoryLLM is a lightweight C-based LLM inference engine optimized for memory-constrained environments. ✨ Features: - 📊 Multiple quantization (INT8/4/2) - 💾 Smart memory management - 🔄 Efficien…
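For context on the INT8 option in that feature list: symmetric INT8 quantization maps float weights into [-127, 127] with a single scale factor, storing one byte per weight instead of four. A minimal NumPy sketch of the general technique (illustrative only, not this repo's C implementation):

```python
import numpy as np

def quantize_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric quantization: scale floats into [-127, 127], store as int8."""
    max_abs = float(np.max(np.abs(w)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights by multiplying back by the scale."""
    return q.astype(np.float32) * scale

w = np.random.randn(4).astype(np.float32)
q, s = quantize_int8(w)
print(w, dequantize_int8(q, s))  # roughly equal, at a quarter of the storage
```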
A scalable inference server for models optimized with OpenVINO™
High-speed and easy-to-use LLM serving framework for local deployment