Created with ❤️ by Navuluri Balaji
A living collection of AI recipes, fine-tuning hacks, architectural deep dives, and agentic frameworks — distilled into one place for builders, researchers, and AI enthusiasts.
AI Recipe Genius is not just a repository — it’s a cookbook for AI builders.
Think of it as your culinary lab for AI 🍲 where each recipe is a practical guide:
- 🧑‍🍳 Fine-tuning large & small models (LLMs, SLMs)
- 🧠 Understanding LLM architectures (Transformers, MoE, CoT, etc.)
- 🤖 Building custom Agentic frameworks and real-world use cases
- 🎛️ Exploring advanced training concepts (RL, RLHF, etc.)
- ☁️ Deploying models seamlessly (Cloud, Docker, Ollama, Edge devices)
Whether you’re just starting out or pushing the boundaries of AI research, this repo is your go-to kitchen for AI experiments, deployment recipes, and reusable patterns.
- Models: LLMs, SLMs, MoE architectures, fine-tuned transformers
- Frameworks: PyTorch, TensorFlow, Hugging Face, LangChain, LangGraph, CrewAI, Autogen, ADK, nbagents
- Backend: FastAPI, Flask
- Frontend: Streamlit, React (for interactive demos)
- Libraries: transformers, torch, tensorflow, numpy, pandas, scikit-learn, requests
- Containerization: Docker, Kubernetes
- Deployment: Cloud (GCP, AWS, Azure), Ollama, Local GPUs/TPUs
This repo is structured like a cookbook — with each section containing recipes, tutorials, and code snippets:
- Fine-tuning LLMs & SLMs
- Instruction tuning & adapters (LoRA, PEFT, QLoRA)
- MoE (Mixture of Experts) deep dives
- Chain of Thought (CoT), ReAct, and reasoning strategies
- RL and RLHF explained with hands-on code
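As a taste of the fine-tuning recipes, here is a minimal from-scratch sketch of the idea behind LoRA (and, by extension, PEFT/QLoRA): freeze the pretrained weights and train only a low-rank update. The layer sizes and rank are illustrative; real recipes in this repo lean on the Hugging Face `peft` library rather than hand-rolling this.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W x + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        # A is small random, B starts at zero so training begins at the base model
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        # frozen path + low-rank trainable correction
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinear(nn.Linear(512, 512), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total}")  # -> trainable: 8192 / 270848
```

Only ~3% of the parameters train here, which is exactly why LoRA makes fine-tuning feasible on modest GPUs.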
- Building custom AI agents from scratch
- Using frameworks (LangChain, CrewAI, Autogen, ADK, LangGraph, nbagents)
- Multi-agent collaboration patterns
- Real-world agentic use cases (customer support, research agents, workflow automation, etc.)
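To make the agent recipes concrete, here is a framework-free sketch of the ReAct-style loop (decide → act → observe) that LangChain, CrewAI, and friends automate for you. The `stub_llm` policy and the `calculator` tool are placeholders standing in for a real model call and a real tool registry.

```python
def calculator(expression: str) -> str:
    # Hypothetical tool: evaluate simple arithmetic with builtins disabled
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def stub_llm(question: str, observations: list) -> dict:
    """Stand-in for a real model call: emit either an action or a final answer."""
    if not observations:
        return {"action": "calculator", "input": "2 + 2 * 10"}
    return {"answer": f"The result is {observations[-1]}"}

def run_agent(question: str, max_steps: int = 3) -> str:
    observations = []
    for _ in range(max_steps):  # think -> act -> observe, until an answer appears
        decision = stub_llm(question, observations)
        if "answer" in decision:
            return decision["answer"]
        tool = TOOLS[decision["action"]]
        observations.append(tool(decision["input"]))
    return "No answer within step budget."

print(run_agent("What is 2 + 2 * 10?"))  # -> The result is 22
```

Swapping `stub_llm` for an actual LLM call (and `TOOLS` for real tools) is essentially what the frameworks above package up, along with memory and multi-agent routing.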
- Containerizing AI apps with Docker
- Running on Cloud (GCP, AWS, Azure)
- Lightweight local deployments with Ollama
- Scaling inference & serving with FastAPI
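A typical starting point for the containerization recipes is a Dockerfile along these lines; the file paths (`app/main.py`, `requirements.txt`), base image, and port are placeholders for your own project.

```dockerfile
# Hypothetical container for a FastAPI inference service
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
```

The same image runs unchanged on GCP, AWS, Azure, or a local GPU box, which is the point of containerizing first and choosing a cloud second.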
- Under-the-hood of Transformers & LLMs
- Memory, Attention, and Scaling Laws
- Efficient fine-tuning for low-resource environments
- Trade-offs between SLMs & LLMs
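The architecture deep dives all build on the core operation of every Transformer: scaled dot-product attention. A minimal single-head NumPy sketch (sequence length and head dimension are illustrative):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (seq_q, seq_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V, weights

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 64)) for _ in range(3))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 64); each row of w sums to 1
```

Multi-head attention, KV caching, and the memory/scaling trade-offs covered in this section are all variations on this one function.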
- Fork the repo 🍴
- Create a feature branch 🌱
- Submit a PR 🚀
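The contribution steps above, as a shell walkthrough (run here in a throwaway local repo standing in for your fork; the branch and file names are just examples):

```shell
repo=$(mktemp -d) && cd "$repo"
git init -q
git checkout -q -b feature/my-new-recipe      # work on a feature branch
echo "my recipe" > recipe.md
git add recipe.md
git -c user.email=you@example.com -c user.name=You commit -qm "Add recipe"
git branch --show-current                     # -> feature/my-new-recipe
```

After pushing the branch to your fork, open a Pull Request against this repo from the GitHub UI.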