
INFO 7375 — Prompt Engineering for Generative AI

Full Course Outline (Book-Style Structure)



Part I — Foundations of Generative AI and Prompt Design


Module 1 — Foundations of LLMs, Prompting, and AI Literacy

Core principles of large language models, generative AI, and structured prompting. Topics include:

  • What LLMs are and how they work at a high level
  • Temperature, randomness, and sampling strategies (see the sketch after this list)
  • How generative models interpret and transform instructions
  • Writing your first prompts
  • Understanding AI limitations and failure modes
  • Prompting as human–AI collaboration
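
A minimal first-prompt sketch tying these ideas together, assuming the OpenAI Python SDK and an OPENAI_API_KEY in the environment; the model name is an assumption, and any chat-completion model would do:

```python
# First-prompt sketch: system + user message with an explicit sampling setting.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",   # assumed model name; substitute any chat model
    messages=[
        {"role": "system", "content": "You are a concise teaching assistant."},
        {"role": "user", "content": "Explain temperature in LLM sampling in two sentences."},
    ],
    temperature=0.2,       # low temperature -> less random, more repeatable output
    top_p=1.0,             # nucleus sampling left at its default here
)
print(response.choices[0].message.content)
```

Raising temperature toward 1.0 (or above) makes sampling more diverse; lowering it toward 0 makes repeated runs more consistent.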

Module 2 — The Art and Science of Prompt Engineering

A deep dive into the mechanics, patterns, and cognitive structures of effective prompting. Includes:

  • What is a prompt—and what isn’t a prompt
  • Persona pattern, audience pattern, question refinement pattern
  • Cognitive verifier pattern and self-checking loops
  • Few-shot prompting, contrastive prompting, chain-of-thought (few-shot format sketched below)
  • Game-play and simulation-based prompting
  • Structural prompt design for clarity, safety, and reliability
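
A minimal few-shot prompting sketch in plain Python; the reviews and labels are made up, and the assembled string could be sent to any chat or completion model:

```python
# Few-shot sketch: instruction, labeled examples, then the new input to classify.
EXAMPLES = [
    ("The battery died after two days.", "negative"),
    ("Setup took thirty seconds and it just worked.", "positive"),
]

def build_few_shot_prompt(review: str) -> str:
    """Assemble instruction + labeled examples + the unlabeled input."""
    lines = ["Classify each review as positive or negative.", ""]
    for text, label in EXAMPLES:
        lines.append(f"Review: {text}\nSentiment: {label}\n")
    lines.append(f"Review: {review}\nSentiment:")
    return "\n".join(lines)

print(build_few_shot_prompt("The screen cracked in my pocket."))
```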

Module 3 — Prompt Engineering Patterns, Frameworks, and Architectures

Advanced prompting systems that generalize across tasks. Covers:

  • The Prompt Stack (system → instruction → examples → constraints → output format), sketched after this list
  • ReAct prompting (Reason + Act)
  • Iterative refiners and prompt feedback loops
  • Pattern libraries and reusable prompt templates
  • Prompt compression, abstraction, and modularization
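
A minimal Prompt Stack sketch showing how the five layers can map onto a single chat request; the field contents are illustrative, not a fixed schema:

```python
# Prompt Stack sketch: each layer becomes part of one chat request.
system = "You are a careful data analyst. Answer only from the provided table."
instruction = "Summarize the quarterly revenue trend."
examples = 'Example output: "Revenue rose 12% quarter over quarter, driven by subscriptions."'
constraints = "Use at most two sentences. Do not speculate beyond the data."
output_format = 'Return a single JSON object: {"summary": "..."}'

messages = [
    {"role": "system", "content": system},
    {"role": "user", "content": "\n\n".join([instruction, examples, constraints, output_format])},
]
print(messages)
```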

Part II — Integrating LLMs with Tools, Data, and Retrieval Systems


Module 4 — Vector Databases, Embeddings, and Semantic Retrieval

Building retrieval-augmented generation (RAG) systems. Topics:

  • Embeddings: semantic geometry in high-dimensional space
  • Vector databases and similarity search (see the retrieval sketch after this list)
  • Indexing strategies and chunking best practices
  • Creating powerful knowledge retrieval pipelines
  • RAG failure modes and hallucination reduction
  • Semantic search applications with hands-on labs
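
A minimal semantic-retrieval sketch, assuming the sentence-transformers package; the embedding model name and toy corpus are assumptions:

```python
# Retrieval sketch: embed a corpus, embed a query, rank by cosine similarity.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")   # assumed embedding model

corpus = [
    "Vector databases index embeddings for fast similarity search.",
    "LoRA adds low-rank adapters for parameter-efficient fine-tuning.",
    "Chunking splits documents into passages before they are embedded.",
]
corpus_vecs = model.encode(corpus, normalize_embeddings=True)

query_vec = model.encode(["How do I search documents by meaning?"],
                         normalize_embeddings=True)[0]

# With unit-normalized vectors, cosine similarity is a plain dot product.
scores = corpus_vecs @ query_vec
print(corpus[int(np.argmax(scores))])
```

A vector database replaces the in-memory dot product above with an approximate-nearest-neighbor index over millions of chunks.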

Module 5 — LangChain for LLM Applications

A practical introduction to LangChain for building real-world LLM-powered systems. Includes:

  • LangChain architecture and core components (minimal chain sketched below)
  • Tools, Agents, Chains, and memory systems
  • Integrating LLMs with APIs, documents, and external data sources
  • Designing production-grade LLM apps
  • Case studies of LangChain in industry
  • Troubleshooting common pitfalls
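
A minimal LangChain sketch using LCEL-style composition, assuming the langchain-core and langchain-openai packages; import paths shift between LangChain releases, so treat this as a sketch rather than a reference:

```python
# LCEL sketch: prompt template -> chat model -> string output parser.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_template(
    "Summarize the following support ticket in one sentence:\n\n{ticket}"
)
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)   # assumed model name
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"ticket": "The export button has returned a 500 error since Monday."}))
```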

Part III — Model Customization, Fine-Tuning, and Alignment


Module 6 — Fine-Tuning and Configuring Large Language Models

Hands-on and theoretical foundations for building task-specific models. Topics include:

  • Transformer architecture foundations
  • Pre-training: corpora, scaling laws, and compute constraints
  • Instruction fine-tuning (SFT)
  • Multi-task training and domain adaptation
  • Evaluation of fine-tuned models
  • Dataset design for fine-tuning
  • Tools: LoRA, QLoRA, PEFT, Hugging Face workflows (LoRA configuration sketched below)
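
A minimal LoRA configuration sketch, assuming the Hugging Face transformers and peft packages; the base model and hyperparameters are illustrative, not recommendations:

```python
# LoRA sketch: wrap a small causal LM so only low-rank adapter weights train.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")   # small base model for the sketch

lora_config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor applied to the update
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()   # only the adapter parameters are trainable
```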

Module 7 — Reinforcement Learning, Alignment, and Human Feedback

Bringing models into alignment with human intentions. Covers:

  • Reinforcement Learning from Human Feedback (RLHF)
  • Reward modeling and preference optimization (pairwise loss sketched below)
  • Safety, ethics, and steerability
  • Value alignment in generative systems
  • Techniques for reducing harmful or biased outputs
  • Deploying aligned models in real-world settings
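
A minimal reward-modeling sketch: the pairwise (Bradley-Terry style) loss behind RLHF preference optimization, shown in PyTorch on made-up reward scores:

```python
# Pairwise preference loss: push rewards for chosen responses above rejected ones.
import torch
import torch.nn.functional as F

# Scalar rewards a reward model might assign to chosen vs. rejected responses.
reward_chosen = torch.tensor([1.3, 0.2, 0.9])
reward_rejected = torch.tensor([0.4, 0.5, -0.1])

loss = -F.logsigmoid(reward_chosen - reward_rejected).mean()
print(loss.item())
```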

Part IV — Agentic AI, Architectures, and Autonomous Systems


Module 8 — Agentic AI Systems: Theory and Practice

Understanding autonomous systems powered by LLMs. Topics include:

  • What is an AI agent?
  • ReAct agents (reasoning and acting loops), sketched after this list
  • Reflexion and self-debugging agents
  • Function calling and tool orchestration
  • Planning and multi-step reasoning
  • Short-term and long-term memory systems
  • Multi-agent collaboration
  • Ethical and safety considerations in agentic AI
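
A minimal ReAct-style agent loop sketch; call_llm is a hypothetical, scripted stand-in for a real chat-model call, and the Action/Observation format and tool registry are assumptions:

```python
# ReAct sketch: the model alternates Actions (tool calls) and a Final Answer,
# with each tool Observation appended to the transcript it sees next turn.
import json

def search_docs(query: str) -> str:
    """Toy tool: pretend to search a document store."""
    return f"Top result for '{query}': vector databases index embeddings."

TOOLS = {"search_docs": search_docs}

# Canned model turns so the loop runs end to end without an API key.
_SCRIPT = iter([
    'Action: {"tool": "search_docs", "input": "what do vector databases store"}',
    "Final Answer: Vector databases index embeddings for similarity search.",
])

def call_llm(transcript: str) -> str:
    """Hypothetical stand-in for an LLM call that reads the transcript."""
    return next(_SCRIPT)

def react_loop(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = call_llm(transcript)   # model proposes an Action or a Final Answer
        transcript += step + "\n"
        if step.startswith("Final Answer:"):
            return step
        if step.startswith("Action:"):
            action = json.loads(step[len("Action:"):])
            observation = TOOLS[action["tool"]](action["input"])
            transcript += f"Observation: {observation}\n"
    return "No answer within the step budget."

print(react_loop("What do vector databases store?"))
```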

Part V — Visualization, Communication, and Creative Applications


Module 9 — Designing Prompts for Creativity, Media, and Interactive Systems

How prompt engineering powers creative industries. Includes:

  • Prompting for writing, code, games, and interactive fiction
  • Prompt-driven design workflows
  • Generating multimodal content (images, audio, video)
  • Prompt patterns for structure, tone, and aesthetics
  • Best practices for reproducible creative pipelines

Module 10 — Communicating AI Outputs in Academic & Industry Settings

Presenting and explaining generative AI results in academic and industry contexts. Students learn:

  • How to communicate model behavior
  • How to explain outputs to non-technical audiences
  • Demonstrating reasoning and uncertainty
  • Documenting prompt design decisions
  • Presenting models, agents, and solutions effectively

Part VI — Research Projects, Evaluation, and Independent Inquiry


Module 11 — Evaluation, Benchmarks, and AI Quality Assurance

Building a rigorous evaluation mindset. Topics:

  • Model evaluation metrics
  • Prompt evaluation and A/B testing (see the sketch after this list)
  • Benchmarks for agentic AI
  • Error analysis and model debugging
  • Designing experiments for prompt research
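
A minimal prompt A/B-testing sketch; ask_model is a hypothetical stub for whatever model call is being evaluated, and the tiny labeled set is illustrative:

```python
# A/B sketch: score two prompt templates on a small labeled set by exact match.
def ask_model(prompt: str) -> str:
    """Stand-in for an LLM call; returns a fixed answer so the sketch runs offline."""
    return "positive"

EVAL_SET = [
    ("Setup took thirty seconds and it just worked.", "positive"),
    ("The battery died after two days.", "negative"),
]

PROMPTS = {
    "A": "Classify the review as positive or negative: {text}",
    "B": "You are a strict sentiment rater. Reply with exactly one word, "
         "positive or negative.\n\nReview: {text}",
}

def accuracy(template: str) -> float:
    hits = sum(ask_model(template.format(text=t)).strip().lower() == label
               for t, label in EVAL_SET)
    return hits / len(EVAL_SET)

for name, template in PROMPTS.items():
    print(name, accuracy(template))
```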

Module 12 — Final Research Project (Capstone)

Students conduct an original research or engineering project in one of the following areas:

  • Prompt engineering
  • Agentic systems
  • LLM-powered applications
  • Reinforcement learning & alignment
  • RAG systems and vector databases
  • Evaluation frameworks
  • Creative AI systems

Deliverables include:

  • Proposal
  • Mid-semester progress report
  • Final demonstration
  • Written paper and GitHub submission

Course Objectives

By the end of this course, students will:

Prompt Engineering & Applied LLMs

  • Master prompt patterns across domains
  • Build robust prompt systems for creative and analytical tasks
  • Understand how LLMs interpret and transform instructions

Technical Mastery of Retrieval & Integration

  • Construct scalable RAG pipelines
  • Use vector databases, embeddings, and LangChain

Model Customization & Alignment

  • Fine-tune LLMs and evaluate their performance
  • Apply RLHF principles in practical settings

Agentic AI Development

  • Build multi-step reasoning agents
  • Integrate tools, memory, APIs, and planning systems

Research, Creativity & Communication

  • Produce high-quality written and computational work
  • Present AI research with clarity and rigor

Course Materials

Primary Texts

  • Prompt Engineering for Generative AI — Nik Bear Brown
  • How to Speak Bot: Prompt Patterns — Nik Bear Brown

Additional Materials

  • Research papers on LLMs, prompting, fine-tuning, RLHF
  • Industry reports on LLM applications
  • LangChain documentation
  • Vector DB & embeddings tutorials
  • Reinforcement Learning literature

Course Highlights

  • Hands-on focus: prompts, agents, fine-tuning, LangChain
  • Cutting-edge developments in generative AI and agentic systems
  • Emphasis on real-world applications and advanced engineering
  • A portfolio-ready final project demonstrating mastery

Prerequisites & Approvals

  • Strong Python background recommended
  • Independent research expected
  • No instructor approval required
