This hands-on 1-hour workshop introduces participants to building intelligent AI applications using Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG). Attendees will progress from basic LLM API calls to creating a fully functional RAG-based chatbot that can answer questions using custom documents. The workshop emphasizes practical coding with immediate results, using popular tools like OpenAI's API, ChromaDB, and Gradio for interactive interfaces.
Target Audience: Developers and technical professionals with basic Python knowledge who want to understand and build agentic AI applications.
Key Takeaways:
- Understand core concepts of agentic AI and prompt engineering
- Make API calls to LLMs for various tasks (text, image, audio generation)
- Learn how embeddings and vector databases enable semantic search
- Build a complete RAG-based chatbot from scratch
Slides covering:
- What is Agentic AI?
- Basics of Prompt Engineering
- LLM Workflows vs Multi-Agent Systems
- Understanding Embeddings and Vector Databases
- Introduction to Retrieval-Augmented Generation (RAG)
- Basic API call to an LLM - text in, text out
- Image generation example - text to image
- Sound generation example - text to audio
- Create a Gradio UI for interactive testing
- Implement streaming output for real-time responses
- Build a multi-turn chat conversation
Technologies: OpenAI Python SDK, Gradio
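The multi-turn chat with streaming output described above can be sketched as follows. This is a minimal, illustrative example, not the workshop's exact notebook code: the model name, the `(user, assistant)` history format (as Gradio chat components commonly provide it), and both function names are assumptions.

```python
import os

def build_messages(system_prompt, history, user_input):
    """Assemble the multi-turn message list the Chat Completions API expects.

    `history` is a list of (user_text, assistant_text) turns.
    """
    messages = [{"role": "system", "content": system_prompt}]
    for user_text, assistant_text in history:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": user_input})
    return messages

def stream_reply(messages, model="gpt-4o-mini"):
    """Yield the assistant's reply chunk by chunk (requires OPENAI_API_KEY)."""
    from openai import OpenAI  # lazy import so build_messages works without the SDK
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    stream = client.chat.completions.create(model=model, messages=messages, stream=True)
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            yield chunk.choices[0].delta.content
```

In a Gradio chat callback you would accumulate the yielded chunks into a growing string and yield that to the UI, which is what produces the real-time "typing" effect.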
- Generate embeddings for different documents
- Calculate cosine similarity between documents
- Find the most similar documents to a query (without vector DB yet)
Technologies: OpenAI Embeddings API
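The embeddings exercise above can be sketched like this: compute cosine similarity by hand and rank documents against a query without a vector database. The `embed` helper is an assumption about how the notebook wraps the Embeddings API; the similarity math itself is standard.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def embed(texts, model="text-embedding-3-small"):
    """Fetch embedding vectors for a list of texts (requires OPENAI_API_KEY)."""
    from openai import OpenAI  # lazy import: the math above runs without the SDK
    client = OpenAI()
    response = client.embeddings.create(model=model, input=texts)
    return [item.embedding for item in response.data]

def most_similar(query_vec, doc_vecs):
    """Index of the document vector closest to the query (no vector DB yet)."""
    scores = [cosine_similarity(query_vec, v) for v in doc_vecs]
    return max(range(len(scores)), key=scores.__getitem__)
```

Doing the ranking manually like this motivates the next part: a vector database such as ChromaDB does the same nearest-neighbor search, but indexed and at scale.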
- Ingest documents into ChromaDB (using 5 Ready Tensor publications)
- Implement semantic document retrieval
- Build a complete RAG pipeline
- Implement in a Gradio Chat UI with RAG
Technologies: ChromaDB, OpenAI API, Gradio
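The RAG pipeline above (retrieve, augment the prompt, generate) can be sketched as below. This is a hedged outline, not the workshop's exact code: the prompt wording, function names, and model choice are assumptions; the ChromaDB and OpenAI calls follow their standard client APIs.

```python
def build_rag_prompt(question, retrieved_docs):
    """Compose the augmented prompt: retrieved context first, then the question."""
    context = "\n\n".join(retrieved_docs)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

def retrieve(collection, question, k=3):
    """Semantic retrieval: top-k documents from a ChromaDB collection."""
    results = collection.query(query_texts=[question], n_results=k)
    return results["documents"][0]

def rag_answer(collection, question, model="gpt-4o-mini"):
    """Full RAG pipeline: retrieve context, then ask the LLM (needs OPENAI_API_KEY)."""
    from openai import OpenAI  # lazy import so build_rag_prompt runs without the SDK
    prompt = build_rag_prompt(question, retrieve(collection, question))
    client = OpenAI()
    response = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

# Ingestion sketch (run once, before querying):
#   import chromadb
#   client = chromadb.Client()
#   collection = client.create_collection("workshop_docs")
#   collection.add(documents=docs, ids=[f"doc{i}" for i in range(len(docs))])
```

Wiring `rag_answer` into a Gradio chat callback gives the complete RAG chatbot that closes the hands-on portion.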
Slides covering:
- Summary of what we covered
- Limitations and cautions of current approaches
- Next steps and advanced topics to explore
- Resources for continued learning
- LLM Provider: OpenAI (GPT-4o models)
- Embeddings: OpenAI text-embedding-3-small
- Vector Database: ChromaDB
- UI Framework: Gradio
- Development Environment: Jupyter Notebooks
- Language: Python 3.11+
Attendees should:
- Have Python 3.11 or higher installed
- Obtain an OpenAI API key
- Install required packages: openai, chromadb, gradio, numpy
- Have basic familiarity with Python and Jupyter notebooks
Follow these steps to get your environment ready for the workshop.
git clone <your-repo-url>
cd <your-repo-folder>
It's best to isolate workshop dependencies. You can use either venv or conda.
Using venv (Python 3.11+ recommended):
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
Using conda:
conda create -n agentic_ai python=3.11 -y
conda activate agentic_ai
pip install -r requirements.txt
You need an OpenAI API key with access to GPT-4o models. Create a .env file in the project root:
# Create .env file in the project root
touch .env
Add your API key to the .env file:
OPENAI_API_KEY=your_api_key_here
Important: Never commit the .env file to version control. It's already included in .gitignore.
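At runtime, the key from .env has to end up in the process environment. The python-dotenv package's `load_dotenv()` is the usual way to do this; since it isn't in the package list above, here is a stdlib-only stand-in (the function name is ours, not a library API):

```python
import os

def load_dotenv_minimal(path=".env"):
    """Minimal stand-in for python-dotenv's load_dotenv(): parse KEY=VALUE lines
    into os.environ. Existing environment variables are not overwritten."""
    if not os.path.exists(path):
        return
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

The OpenAI client reads `OPENAI_API_KEY` from the environment automatically, so after loading the file no key needs to appear in notebook code.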
jupyter notebook
Open one of the notebooks (part1_basic_llm.ipynb, part2_embeddings_cosine.ipynb, or part3_rag_chromadb.ipynb) to get started.
⚡ Tip: If you run into dependency conflicts, you can upgrade packages individually, e.g.:
pip install --upgrade openai gradio chromadb