Core principles of large language models, generative AI, and structured prompting. Topics include:
- What LLMs are and how they work at a high level
- Temperature, randomness, and sampling strategies (see the sampling sketch after this list)
- How generative models interpret and transform instructions
- Writing your first prompts
- Understanding AI limitations and failure modes
- Prompting as human–AI collaboration
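As a minimal illustration of the temperature topic above, the sketch below scales toy logits by a temperature before sampling from the resulting distribution. The vocabulary and logit values are invented for demonstration; no real model is involved.

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Sample a token index from raw logits, scaled by temperature.

    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more random).
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Toy vocabulary and logits, purely illustrative.
vocab = ["cat", "dog", "bird", "fish"]
logits = [2.0, 1.0, 0.5, -1.0]
for t in (0.2, 1.0, 2.0):
    picks = [vocab[sample_with_temperature(logits, t)] for _ in range(10)]
    print(f"T={t}: {picks}")
```

At low temperature the samples concentrate on the highest-logit token; at high temperature the picks spread out across the vocabulary.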
A deep dive into the mechanics, patterns, and cognitive structures of effective prompting. Includes:
- What is a prompt—and what isn’t a prompt
- Persona pattern, audience pattern, question refinement pattern
- Cognitive verifier pattern and self-checking loops
- Few-shot prompting, contrastive prompting, chain-of-thought (a few-shot sketch follows this list)
- Game-play and simulation-based prompting
- Structural prompt design for clarity, safety, and reliability
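The sketch below assembles a few-shot sentiment prompt with a light persona, one way the persona and few-shot patterns above can be combined. The task, examples, and labels are illustrative only.

```python
# A minimal, model-agnostic few-shot prompt with a persona line.
FEW_SHOT_EXAMPLES = [
    ("The service was slow but the food was great.", "mixed"),
    ("Absolutely loved every minute of it!", "positive"),
    ("Never coming back.", "negative"),
]

def build_few_shot_prompt(review: str) -> str:
    lines = [
        "You are a careful sentiment analyst.",              # persona pattern
        "Label each review as positive, negative, or mixed.",
    ]
    for text, label in FEW_SHOT_EXAMPLES:                    # few-shot examples
        lines.append(f"Review: {text}\nLabel: {label}")
    lines.append(f"Review: {review}\nLabel:")                # the new input
    return "\n\n".join(lines)

print(build_few_shot_prompt("The plot was thin, but the acting carried it."))
```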
Advanced prompting systems that generalize across tasks. Covers:
- The Prompt Stack (system → instruction → examples → constraints → output format), sketched in code below
- ReAct prompting (Reason + Act)
- Iterative refiners and prompt feedback loops
- Pattern libraries and reusable prompt templates
- Prompt compression, abstraction, and modularization
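To make the Prompt Stack concrete, here is a small sketch that composes the five layers into a list of chat messages. The field contents and message schema are illustrative assumptions, not a prescribed API.

```python
from dataclasses import dataclass, field

@dataclass
class PromptStack:
    system: str
    instruction: str
    examples: list[tuple[str, str]] = field(default_factory=list)
    constraints: list[str] = field(default_factory=list)
    output_format: str = ""

    def to_messages(self, user_input: str) -> list[dict]:
        # System layer first, then worked examples, then the task itself.
        messages = [{"role": "system", "content": self.system}]
        for question, answer in self.examples:
            messages.append({"role": "user", "content": question})
            messages.append({"role": "assistant", "content": answer})
        rules = "\n".join(f"- {c}" for c in self.constraints)
        messages.append({
            "role": "user",
            "content": f"{self.instruction}\n\nConstraints:\n{rules}\n\n"
                       f"Output format: {self.output_format}\n\nInput: {user_input}",
        })
        return messages

stack = PromptStack(
    system="You are a precise technical summarizer.",
    instruction="Summarize the input in two sentences.",
    examples=[("Summarize: LLMs predict tokens.",
               "LLMs generate text one token at a time.")],
    constraints=["No speculation", "Plain language"],
    output_format="Two plain-text sentences.",
)
print(stack.to_messages("Summarize: RAG retrieves documents before generation."))
```

Keeping the layers as separate fields makes the stack easy to reuse, test, and swap out module by module.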
Building retrieval-augmented generation (RAG) systems. Topics:
- Embeddings: semantic geometry in high-dimensional space
- Vector databases and similarity search (see the similarity sketch after this list)
- Indexing strategies and chunking best practices
- Creating powerful knowledge retrieval pipelines
- RAG failure modes and hallucination reduction
- Semantic search applications with hands-on labs
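A small sketch of the similarity-search step at the heart of RAG: score a query vector against stored document vectors by cosine similarity and return the best match. The vectors here are random stand-ins; in practice they would come from an embedding model and live in a vector database.

```python
import numpy as np

rng = np.random.default_rng(0)
doc_texts = ["refund policy", "shipping times", "warranty terms"]
doc_vectors = rng.normal(size=(3, 8))                       # pretend 8-dim embeddings
query_vector = doc_vectors[1] + 0.1 * rng.normal(size=8)    # query near "shipping times"

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = [cosine_similarity(query_vector, v) for v in doc_vectors]
best = int(np.argmax(scores))
print(f"Top match: {doc_texts[best]} (score {scores[best]:.3f})")
```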
A practical introduction to LangChain for building real-world LLM-powered systems. Includes:
- LangChain architecture and core components (a minimal chain sketch follows this list)
- Tools, Agents, Chains, and memory systems
- Integrating LLMs with APIs, documents, and external data sources
- Designing production-grade LLM apps
- Case studies of LangChain in industry
- Troubleshooting common pitfalls
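A minimal chain sketch, assuming a recent LangChain release with the langchain-openai package installed and an OPENAI_API_KEY in the environment; import paths have moved between versions, and the model name is only an example.

```python
# Assumes langchain-core and langchain-openai are installed and
# OPENAI_API_KEY is set; adjust imports to your LangChain version.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template(
    "Explain {concept} to a {audience} in three sentences."
)
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.3)  # model name is an example
chain = prompt | llm | StrOutputParser()                # LCEL-style composition

print(chain.invoke({"concept": "vector databases", "audience": "product manager"}))
```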
Hands-on and theoretical foundations for building task-specific models. Topics include:
- Transformer architecture foundations
- Pre-training: corpora, scaling laws, and compute constraints
- Instruction fine-tuning (SFT)
- Multi-task training and domain adaptation
- Evaluation of fine-tuned models
- Dataset design for fine-tuning
- Tools: LoRA, QLoRA, PEFT, Hugging Face workflows (see the LoRA sketch below)
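As a taste of the PEFT workflow, the sketch below attaches LoRA adapters to a small causal language model. The base checkpoint, rank, and target modules are illustrative choices, not recommendations.

```python
# Requires the transformers and peft packages.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_id = "facebook/opt-125m"  # small model used here purely as an example
model = AutoModelForCausalLM.from_pretrained(base_id)

lora_config = LoraConfig(
    r=8,                                  # adapter rank
    lora_alpha=16,                        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights train
```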
Bringing models into alignment with human intentions. Covers:
- Reinforcement Learning from Human Feedback (RLHF)
- Reward modeling and preference optimization (a preference-loss sketch follows this list)
- Safety, ethics, and steerability
- Value alignment in generative systems
- Techniques for reducing harmful or biased outputs
- Deploying aligned models in real-world settings
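The sketch below shows the pairwise preference loss commonly used when training a reward model: the human-preferred response should receive a higher score than the rejected one. The reward values are toy tensors standing in for real model outputs.

```python
import torch
import torch.nn.functional as F

# Scores a reward model might assign to chosen vs. rejected responses (toy values).
chosen_rewards = torch.tensor([1.2, 0.4, 2.0])    # r(x, y_chosen)
rejected_rewards = torch.tensor([0.3, 0.9, 1.1])  # r(x, y_rejected)

# Bradley-Terry style loss: -log sigmoid(r_chosen - r_rejected)
loss = -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
print(f"preference loss: {loss.item():.4f}")
```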
Understanding autonomous systems powered by LLMs. Topics include:
- What is an AI agent?
- ReAct agents (reasoning and acting loops), sketched after this list
- Reflexion and self-debugging agents
- Function calling and tool orchestration
- Planning and multi-step reasoning
- Short-term and long-term memory systems
- Multi-agent collaboration
- Ethical and safety considerations in agentic AI
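A library-free sketch of the ReAct loop: the model alternates Thought, Action, and Observation until it emits a final answer. The "model" here is replaced by canned responses so the control flow stays visible; a real agent would call an LLM and parse its output.

```python
# One hypothetical tool; a real agent would register several.
TOOLS = {"calculator": lambda expr: str(eval(expr, {"__builtins__": {}}))}

# Canned stand-ins for model turns, purely for illustration.
canned_model_turns = [
    "Thought: I need to compute the value.\nAction: calculator: 17 * 23",
    "Thought: I have the result.\nFinal Answer: 391",
]

def run_agent(question: str, max_steps: int = 4) -> str:
    transcript = f"Question: {question}"
    for step in range(max_steps):
        reply = canned_model_turns[step]          # stand-in for an LLM call
        transcript += "\n" + reply
        if "Final Answer:" in reply:
            return reply.split("Final Answer:")[1].strip()
        tool_name, arg = reply.split("Action:")[1].strip().split(": ", 1)
        observation = TOOLS[tool_name](arg)       # act, then observe
        transcript += f"\nObservation: {observation}"
    return "No answer within step budget."

print(run_agent("What is 17 * 23?"))
```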
How prompt engineering powers creative industries. Includes:
- Prompting for writing, code, games, and interactive fiction
- Prompt-driven design workflows
- Generating multimodal content (images, audio, video)
- Prompt patterns for structure, tone, and aesthetics
- Best practices for reproducible creative pipelines
Communicating and presenting AI work. Students learn how to:
- Communicate model behavior
- Explain outputs to non-technical audiences
- Demonstrate reasoning and uncertainty
- Document prompt design decisions
- Present models, agents, and solutions effectively
Building a rigorous evaluation mindset. Topics:
- Model evaluation metrics
- Prompt evaluation and A/B testing (see the sketch after this list)
- Benchmarks for agentic AI
- Error analysis and model debugging
- Designing experiments for prompt research
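A small sketch of prompt A/B testing: run two prompt variants over the same labelled examples and compare accuracy. The examples, variants, and the stubbed model call are all placeholders for a real evaluation harness.

```python
# Labelled evaluation set (illustrative).
labelled_examples = [
    ("The battery died in a day.", "negative"),
    ("Best purchase I've made this year.", "positive"),
]

prompt_variants = {
    "A": "Classify the sentiment as positive or negative: {text}",
    "B": "You are a sentiment rater. Answer with one word, positive or negative.\nText: {text}",
}

def call_model(prompt: str) -> str:
    # Stubbed response; replace with a real LLM API call.
    return "negative" if "died" in prompt else "positive"

def accuracy(template: str) -> float:
    hits = sum(
        call_model(template.format(text=text)).strip().lower() == label
        for text, label in labelled_examples
    )
    return hits / len(labelled_examples)

for name, template in prompt_variants.items():
    print(f"Variant {name}: accuracy {accuracy(template):.0%}")
```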
Students conduct an original research or engineering project in one of the following areas:
- Prompt engineering
- Agentic systems
- LLM-powered applications
- Reinforcement learning & alignment
- RAG systems and vector databases
- Evaluation frameworks
- Creative AI systems
Deliverables include:
- Proposal
- Mid-semester progress report
- Final demonstration
- Written paper and GitHub submission
By the end of this course, students will:
- Master prompt patterns across domains
- Build robust prompt systems for creative and analytical tasks
- Understand how LLMs interpret and transform instructions
- Construct scalable RAG pipelines
- Use vector databases, embeddings, and LangChain
- Fine-tune LLMs and evaluate their performance
- Apply RLHF principles in practical settings
- Build multi-step reasoning agents
- Integrate tools, memory, APIs, and planning systems
- Produce high-quality written and computational work
- Present AI research with clarity and rigor
Readings and resources include:
- Prompt Engineering for Generative AI by Nik Bear Brown
- How to Speak Bot: Prompt Patterns by Nik Bear Brown
- Research papers on LLMs, prompting, fine-tuning, RLHF
- Industry reports on LLM applications
- LangChain documentation
- Vector DB & embeddings tutorials
- Reinforcement Learning literature
Course highlights:
- Hands-on focus: prompts, agents, fine-tuning, LangChain
- Cutting-edge developments in generative AI and agentic systems
- Emphasis on real-world applications and advanced engineering
- A portfolio-ready final project demonstrating mastery
Expectations and prerequisites:
- Strong Python background recommended
- Independent research expected
- No instructor approval required