Merged
25 commits

- `65d8ff3` update unit 0 material (burtenshaw, May 7, 2025)
- `aa6ae2a` first commit on unit1 (burtenshaw, May 7, 2025)
- `8675503` first commit on unit 2 (burtenshaw, May 7, 2025)
- `984ae11` delete empty pages in unit 3 (burtenshaw, May 7, 2025)
- `75007ad` update toc with changes to units (burtenshaw, May 7, 2025)
- `3fda407` add images to unit 0 (burtenshaw, May 8, 2025)
- `bb39b15` review unit 1 (burtenshaw, May 8, 2025)
- `7f21179` update toc (burtenshaw, May 8, 2025)
- `7ffb03e` simplify mcp client example down (burtenshaw, May 8, 2025)
- `f23ddcd` remove word count example from unit 1 (burtenshaw, May 8, 2025)
- `f5f7a9d` add example of client implementation in gradio (burtenshaw, May 9, 2025)
- `d2af3b2` embed gradio client app (burtenshaw, May 9, 2025)
- `1c12a1a` update toc with gradio client (burtenshaw, May 9, 2025)
- `dd0474c` update toc (burtenshaw, May 9, 2025)
- `6a9e17d` fix all image links to use dataset repo (burtenshaw, May 9, 2025)
- `11d419b` Merge branch 'main' into first-release-unit-1-2 (burtenshaw, May 12, 2025)
- `05299e0` add youtube video on sdk in python (burtenshaw, May 13, 2025)
- `4ae7517` add a use case for a code editing agent (burtenshaw, May 13, 2025)
- `849381f` explain use of sse (burtenshaw, May 13, 2025)
- `36c1ea9` add outputs to mcp server examples (burtenshaw, May 13, 2025)
- `3841c27` embed new spaces video (burtenshaw, May 13, 2025)
- `8e25a3a` add and edit tiny agents blog post (burtenshaw, May 13, 2025)
- `afab954` improve relationship between tiny agents and gradio use case (burtenshaw, May 13, 2025)
- `5c3116a` reduce focus on claud desktop (burtenshaw, May 13, 2025)
- `76b6c77` add tiny agents to toc (burtenshaw, May 14, 2025)
40 changes: 15 additions & 25 deletions units/en/_toctree.yml
@@ -15,40 +15,30 @@
title: The Communication Protocol
- local: unit1/capabilities
title: Understanding MCP Capabilities
- local: unit1/sdk
title: MCP SDK
- local: unit1/mcp-clients
title: MCP Clients
- local: unit1/gradio-mcp
title: Gradio MCP Integration

- title: "2. Use Case: Building with MCP"
- title: "2. Use Case: End-to-End MCP Application"
sections:
- local: unit2/introduction
title: Introduction
- local: unit2/environment-setup
title: Setting Up Your Development Environment & SDKs
- local: unit2/building-server
title: Building Your First MCP Server
- local: unit2/server-capabilities
title: Implementing Server Capabilities
- local: unit2/developing-clients
title: Developing MCP Clients
- local: unit2/configuration
title: Configuration, Authentication, and Debugging
- local: unit2/hub-mcp-servers
title: MCP Servers on Hugging Face Hub
title: Introduction to Building an MCP Application
- local: unit2/gradio-server
title: Building the Gradio MCP Server
- local: unit2/clients
title: Using MCP Clients with your application
- local: unit2/gradio-client
title: Building an MCP Client with Gradio
- local: unit2/tiny-agents
title: Building a Tiny Agent with TypeScript

- title: "3. Use Case: Deploying with MCP"
- title: "3. Use Case: Advanced MCP Development"
sections:
- local: unit3/introduction
title: Introduction
- local: unit3/advanced-features
title: Exploring Advanced MCP Features
- local: unit3/security
title: Security Deep Dive - Threats and Mitigation Strategies
- local: unit3/limitations
title: Limitations, Challenges, and Comparisons
- local: unit3/huggingface-ecosystem
title: Hugging Face's Tiny Agents and MCP
- local: unit3/final-project
title: Final Project - Building a Complete MCP Application

- title: "Bonus Units"
sections:
139 changes: 139 additions & 0 deletions units/en/unit0/introduction.mdx
@@ -0,0 +1,139 @@
# Welcome to the 🤗 Model Context Protocol (MCP) Course

![MCP Course thumbnail](https://huggingface.co/datasets/mcp-course/images/resolve/main/unit0/1.png)

Welcome to the most exciting topic in AI today: **Model Context Protocol (MCP)**!

This free course will take you on a journey, **from beginner to informed**, in understanding, using, and building applications with MCP.

This first unit will help you onboard:

* Discover the **course's syllabus**.
* **Get more information about the certification process and the schedule**.
* Get to know the team behind the course.
* Create your **account**.
* **Sign up for our Discord server**, and meet your classmates and us.

Let's get started!

## What to expect from this course?

In this course, you will:

* 📖 Study Model Context Protocol in **theory, design, and practice.**
* 🧑‍💻 Learn to **use established MCP SDKs and frameworks**.
* 💾 **Share your projects** and explore applications created by the community.
* 🏆 Participate in challenges where you will **evaluate your MCP implementations against other students'.**
* 🎓 **Earn a certificate of completion** by completing assignments.

And more!

At the end of this course, you'll understand **how MCP works and how to build your own AI applications that leverage external data and tools using the latest MCP standards**.

Don't forget to [**sign up to the course!**](https://huggingface.co/mcp-course)

## What does the course look like?

The course is composed of:

* _Foundational Units_: where you learn MCP **concepts in theory**.
* _Hands-on_: where you'll learn **to use established MCP SDKs** to build your applications. These hands-on sections will have pre-configured environments.
* _Use case assignments_: where you'll apply the concepts you've learned to solve a real-world problem that you'll choose.
* _Collaborations_: We're collaborating with Hugging Face's partners to give you the latest MCP implementations and tools.

This **course is a living project, evolving with your feedback and contributions!** Feel free to open issues and PRs in GitHub, and engage in discussions in our Discord server.

After you have gone through the course, you can also send your feedback 👉 using this form [LINK TO FEEDBACK FORM]

## What's the syllabus?

Here is the **general syllabus for the course**. A more detailed list of topics will be released with each unit.

| Chapter | Topic | Description |
| ------- | ------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------- |
| 0 | Onboarding | Sets you up with the tools and platforms that you will use. |
| 1 | MCP Fundamentals, Architecture and Core Concepts | Explains the core concepts, architecture, and components of the Model Context Protocol, and shows a simple use case. |
| 2 | End-to-end Use case: MCP in Action | Builds a simple end-to-end MCP application that you can share with the community. |
| 3 | Deployed Use case: MCP in Action | Builds a deployed MCP application using the Hugging Face ecosystem and partners' services. |
| 4 | Bonus Units | Bonus units to help you get more out of the course, working with partners' libraries and services. |

## What are the prerequisites?

To be able to follow this course, you should have:

* Basic understanding of AI and LLM concepts
* Familiarity with software development principles and API concepts
* Experience with at least one programming language (Python or TypeScript examples will be shown)

If you don't have any of these, don't worry! Here are some resources that can help you:

* [LLM Course](https://huggingface.co/learn/llm-course/en/chapter1/10) will guide you through the basics of using and building with LLMs.
* [Agents Course](https://huggingface.co/learn/agents-course/en/chapter1/10) will guide you through building AI agents with LLMs.

<Tip>

The above courses are not prerequisites in themselves, so if you understand the concepts of LLMs and agents, you can start the course now!

</Tip>

## What tools do I need?

You only need 2 things:

* _A computer_ with an internet connection.
* A _Hugging Face account_: to access the course resources and create projects. If you don't have one yet, you can create an account [here](https://huggingface.co/join) (it's free).

## The Certification Process

You can choose to follow this course _in audit mode_, or do the activities and _get one of the two certificates we'll issue_. If you audit the course, you can participate in all the challenges and do assignments if you want, and **you don't need to notify us**.

The certification process is **completely free**:

* _To get a certification for fundamentals_: you need to complete Unit 1 of the course. This is intended for students who want to get up to date with the latest trends in MCP without needing to build a full application.
* _To get a certificate of completion_: you need to complete the use case units (2 and 3). This is intended for students who want to build a full application and share it with the community.

## What is the recommended pace?

Each chapter in this course is designed **to be completed in 1 week, with approximately 3-4 hours of work per week**.

Since there's a deadline, we provide a recommended pace:

![Recommended Pace](https://huggingface.co/datasets/mcp-course/images/resolve/main/unit0/2.png)

## How to get the most out of the course?

To get the most out of the course, we have some advice:

1. **Join study groups in Discord**: studying in groups is always easier. To do that, you need to join our Discord server and verify your account.
2. **Do the quizzes and assignments**: the best way to learn is through hands-on practice and self-assessment.
3. **Define a schedule to stay in sync**: you can use the recommended pace schedule above or create your own.

![Course advice](https://huggingface.co/datasets/mcp-course/images/resolve/main/unit0/3.png)

## Who are we

About the authors:

### Ben Burtenshaw

Ben is a Machine Learning Engineer at Hugging Face who focuses on building LLM applications, with post-training and agentic approaches.

<!-- ## Acknowledgments -->

<!-- We would like to extend our gratitude to the following individuals and partners for their invaluable contributions and support: -->

<!-- TODO: @burtenshaw add contributors and partners -->

## I found a bug, or I want to improve the course

Contributions are **welcome** 🤗

* If you _found a bug 🐛 in a notebook_, please open an issue and **describe the problem**.
* If you _want to improve the course_, you can open a Pull Request.
* If you _want to add a full section or a new unit_, it is best to open an issue and **describe what content you want to add before you start writing it, so that we can guide you**.

## I still have questions

Please ask your question in the #mcp-course-questions channel on our Discord server.

Now that you have all the information, let's get on board ⛵
85 changes: 85 additions & 0 deletions units/en/unit1/architectural-components.mdx
@@ -0,0 +1,85 @@
# Architectural Components of MCP

In the previous section, we discussed the key concepts and terminology of MCP. Now, let's dive deeper into the architectural components that make up the MCP ecosystem.

## Host, Client, and Server

The Model Context Protocol (MCP) is built on a client-server architecture that enables structured communication between AI models and external systems.

![MCP Architecture](https://huggingface.co/datasets/mcp-course/images/resolve/main/unit1/4.png)

The MCP architecture consists of three primary components, each with well-defined roles and responsibilities: Host, Client, and Server. We touched on these in the previous section, but let's dive deeper into each component and their responsibilities.

### Host

The **Host** is the user-facing AI application that end-users interact with directly.

Examples include:
- AI chat apps like OpenAI's ChatGPT or Anthropic's Claude Desktop
- AI-enhanced IDEs like Cursor, or editor integrations like Continue.dev
- Custom AI agents and applications built with libraries like LangChain or smolagents

The Host's responsibilities include:
- Managing user interactions and permissions
- Initiating connections to MCP Servers via MCP Clients
- Orchestrating the overall flow between user requests, LLM processing, and external tools
- Rendering results back to users in a coherent format

In most cases, users will select their host application based on their needs and preferences. For example, a developer may choose Cursor for its powerful code editing capabilities, while domain experts may use custom applications built with smolagents.

### Client

The **Client** is a component within the Host application that manages communication with a specific MCP Server. Key characteristics include:

- Each Client maintains a 1:1 connection with a single Server
- Handles the protocol-level details of MCP communication
- Acts as the intermediary between the Host's logic and the external Server
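
To make this concrete, below is a minimal sketch of a Client opening a 1:1 session with a local Server over stdio. It assumes the official `mcp` Python SDK and a hypothetical `server.py` script on the same machine, so treat it as an illustration rather than a reference implementation.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical local Server script; the Client spawns it and speaks MCP over stdio.
server_params = StdioServerParameters(command="python", args=["server.py"])


async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        # One ClientSession per Server: the 1:1 connection described above.
        async with ClientSession(read, write) as session:
            await session.initialize()  # protocol handshake and capability negotiation
            tools = await session.list_tools()  # discover what the Server offers
            print([tool.name for tool in tools.tools])


asyncio.run(main())
```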

### Server

The **Server** is an external program or service that exposes capabilities to AI models via the MCP protocol. Servers:

- Provide access to specific external tools, data sources, or services
- Act as lightweight wrappers around existing functionality
- Can run locally (on the same machine as the Host) or remotely (over a network)
- Expose their capabilities in a standardized format that Clients can discover and use
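
As a rough sketch, a Server can be a single script that wraps an existing function as a Tool. The example below assumes the `FastMCP` helper from the official `mcp` Python SDK; the weather tool is a placeholder for whatever functionality you want to expose.

```python
from mcp.server.fastmcp import FastMCP

# A lightweight wrapper that exposes existing functionality via MCP.
mcp = FastMCP("Weather Service")


@mcp.tool()
def get_weather(city: str) -> str:
    """Return a short (hard-coded) weather summary for a city."""
    return f"It is sunny in {city} today."


if __name__ == "__main__":
    mcp.run()  # serves over stdio by default; other transports can be configured
```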

## Communication Flow

Let's examine how these components interact in a typical MCP workflow (a short code sketch of the client-side steps follows the list):

<Tip>

In the next section, we'll dive deeper into the communication protocol that connects these components, with practical examples.

</Tip>

1. **User Interaction**: The user interacts with the **Host** application, expressing an intent or query.

2. **Host Processing**: The **Host** processes the user's input, potentially using an LLM to understand the request and determine which external capabilities might be needed.

3. **Client Connection**: The **Host** directs its **Client** component to connect to the appropriate Server(s).

4. **Capability Discovery**: The **Client** queries the **Server** to discover what capabilities (Tools, Resources, Prompts) it offers.

5. **Capability Invocation**: Based on the user's needs or the LLM's determination, the Host instructs the **Client** to invoke specific capabilities from the **Server**.

6. **Server Execution**: The **Server** executes the requested functionality and returns results to the **Client**.

7. **Result Integration**: The **Client** relays these results back to the **Host**, which incorporates them into the context for the LLM or presents them directly to the user.
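
In code, steps 4-7 might look roughly like the sketch below. It assumes the `mcp` Python SDK, a `ClientSession` that has already been initialized as in the Client example above, and the hypothetical `get_weather` tool from the Server example.

```python
from mcp import ClientSession


async def answer_weather_question(session: ClientSession, city: str) -> str:
    """Run steps 4-7 of the flow inside an already-initialized ClientSession."""
    # 4. Capability Discovery: ask the Server what it offers
    tools = await session.list_tools()
    print("Available tools:", [tool.name for tool in tools.tools])

    # 5. Capability Invocation: the Host decides a tool is needed; the Client calls it
    result = await session.call_tool("get_weather", arguments={"city": city})

    # 6. Server Execution happens on the Server side; the Client simply awaits the result
    # 7. Result Integration: extract the text and hand it back to the LLM or the user
    return result.content[0].text
```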

A key advantage of this architecture is its modularity. A single **Host** can connect to multiple **Servers** simultaneously via different **Clients**. New **Servers** can be added to the ecosystem without requiring changes to existing **Hosts**. Capabilities can be easily composed across different **Servers**.

<Tip>

As we discussed in the previous section, this modularity transforms the traditional M×N integration problem (M AI applications connecting to N tools/services) into a more manageable M+N problem, where each Host and Server needs to implement the MCP standard only once. For example, 10 applications and 20 tools would require 200 bespoke integrations, but only 30 MCP implementations.

</Tip>

The architecture might appear simple, but its power lies in the standardization of the communication protocol and the clear separation of responsibilities between components. This design allows for a cohesive ecosystem where AI models can seamlessly connect with an ever-growing array of external tools and data sources.

## Conclusion

These interaction patterns are guided by several key principles that shape the design and evolution of MCP. The protocol emphasizes **standardization** by providing a universal protocol for AI connectivity, while maintaining **simplicity** by keeping the core protocol straightforward yet enabling advanced features. **Safety** is prioritized by requiring explicit user approval for sensitive operations, and **discoverability** enables dynamic discovery of capabilities. The protocol is built with **extensibility** in mind, supporting evolution through versioning and capability negotiation, and ensures **interoperability** across different implementations and environments.

In the next section, we'll explore the communication protocol that enables these components to work together effectively.