A Model Context Protocol server for Azure AI Foundry, providing a unified set of tools for models, knowledge, evaluation, and more.
| Category | Tool | Description |
|---|---|---|
| Explore | `list_models_from_model_catalog` | Retrieves a list of supported models from the Azure AI Foundry catalog. |
| | `list_azure_ai_foundry_labs_projects` | Retrieves a list of state-of-the-art AI models from Microsoft Research available in Azure AI Foundry Labs. |
| | `get_model_details_and_code_samples` | Retrieves detailed information for a specific model from the Azure AI Foundry catalog. |
| Build | `get_prototyping_instructions_for_github_and_labs` | Provides comprehensive instructions and setup guidance for starting to work with models from Azure AI Foundry and Azure AI Foundry Labs. |
| Deploy | `get_model_quotas` | Get model quotas for a specific Azure location. |
| | `create_azure_ai_services_account` | Creates an Azure AI Services account. |
| | `list_deployments_from_azure_ai_services` | Retrieves a list of deployments from Azure AI Services. |
| | `deploy_model_on_ai_services` | Deploys a model on Azure AI Services. |
| | `create_foundry_project` | Creates a new Azure AI Foundry project. |
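For orientation, here is a minimal sketch of how one of these tools could be called from a custom MCP client using the official Python SDK (`mcp` package). The `uvx` command and arguments mirror the VS Code configuration shown later in this README; calling `list_models_from_model_catalog` with no arguments is an assumption, so check each tool's input schema for its real parameters. The same session pattern applies to the knowledge and evaluation tools listed below.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch the server over stdio the same way the VS Code config does.
server = StdioServerParameters(
    command="uvx",
    args=[
        "--prerelease=allow",
        "--from", "git+https://github.com/azure-ai-foundry/mcp-foundry.git",
        "run-azure-ai-foundry-mcp",
    ],
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover the tools the server exposes, with their input schemas.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Assumption: no arguments are required here; consult
            # tools.tools[i].inputSchema for the documented parameters.
            result = await session.call_tool("list_models_from_model_catalog")
            print(result.content)

asyncio.run(main())
```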
| Category | Tool | Description |
|---|---|---|
| Index | `list_index_names` | Retrieves all index names from the AI Search service. |
| | `list_index_schemas` | Retrieves all index schemas from the AI Search service. |
| | `retrieve_index_schema` | Retrieves the schema for a specific index from the AI Search service. |
| | `create_index` | Creates a new index. |
| | `modify_index` | Modifies the definition of an existing index. |
| | `delete_index` | Removes an existing index. |
| Document | `add_document` | Adds a document to the index. |
| | `delete_document` | Removes a document from the index. |
| Query | `query_index` | Searches a specific index to retrieve matching documents. |
| | `get_document_count` | Returns the total number of documents in the index. |
| Indexer | `list_indexers` | Retrieves all indexer names from the AI Search service. |
| | `get_indexer` | Retrieves the full definition of a specific indexer from the AI Search service. |
| | `create_indexer` | Creates a new indexer in the Search service from the given skill set, index, and data source. |
| | `delete_indexer` | Deletes an indexer from the AI Search service by name. |
| Data Source | `list_data_sources` | Retrieves all data source names from the AI Search service. |
| | `get_data_source` | Retrieves the full definition of a specific data source. |
| Skill Set | `list_skill_sets` | Retrieves all skill set names from the AI Search service. |
| | `get_skill_set` | Retrieves the full definition of a specific skill set. |
| Content | `fk_fetch_local_file_contents` | Retrieves the contents of a local file path (sample JSON, document, etc.). |
| | `fk_fetch_url_contents` | Retrieves the contents of a URL (sample JSON, document, etc.). |
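Building on the session sketch above, a knowledge query might look like the following. The `index_name` and `query` argument names are illustrative guesses rather than documented parameters, so inspect each tool's input schema before relying on them.

```python
from mcp import ClientSession

async def search_example(session: ClientSession) -> None:
    # Argument names below are assumptions; read the tool's inputSchema for the real ones.
    hits = await session.call_tool(
        "query_index",
        arguments={"index_name": "my-index", "query": "quarterly revenue"},
    )
    print(hits.content)

    count = await session.call_tool(
        "get_document_count",
        arguments={"index_name": "my-index"},
    )
    print(count.content)
```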
| Category | Tool | Description |
|---|---|---|
| Evaluator Utilities | `list_text_evaluators` | List all available text evaluators. |
| | `list_agent_evaluators` | List all available agent evaluators. |
| | `get_text_evaluator_requirements` | Show input requirements for each text evaluator. |
| | `get_agent_evaluator_requirements` | Show input requirements for each agent evaluator. |
| Text Evaluation | `run_text_eval` | Run one or multiple text evaluators on a JSONL file or content. |
| | `format_evaluation_report` | Convert evaluation output into a readable Markdown report. |
| Agent Evaluation | `agent_query_and_evaluate` | Query an agent and evaluate its response using selected evaluators. End-to-end agent evaluation. |
| | `run_agent_eval` | Evaluate a single agent interaction with specific data (query, response, tool calls, definitions). |
| Agent Service | `list_agents` | List all Azure AI Agents available in the configured project. |
| | `connect_agent` | Send a query to a specified agent. |
| | `query_default_agent` | Query the default agent defined in environment variables. |
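Likewise, the evaluation tools can be driven through the same kind of session. The evaluator name and argument keys below are placeholders chosen to illustrate the flow, not documented parameters.

```python
from mcp import ClientSession

async def eval_example(session: ClientSession) -> None:
    # List the evaluators the server knows about.
    evaluators = await session.call_tool("list_text_evaluators")
    print(evaluators.content)

    # Hypothetical arguments -- the real parameter names come from the tool's inputSchema.
    report = await session.call_tool(
        "run_text_eval",
        arguments={"evaluators": ["relevance"], "file_path": "eval_data.jsonl"},
    )
    print(report.content)
```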
- How can you help me find the right model?
- What models can I use from Azure AI Foundry?
- What OpenAI models are available in Azure AI Foundry?
- What are the most popular models in Azure AI Foundry? Pick 10 models for me.
- What models are good for reasoning? Show me some examples in two buckets, one for large models and one for small models.
- Can you compare the Phi models and explain the differences?
- Show me the model card for Phi-4-reasoning.
- Can you show me how to test a model?
- What does free playground in Azure AI Foundry mean?
- Can I use a GitHub token to test models?
- Show me the latest models that support a GitHub token.
- Who are the model publishers for the models in Azure AI Foundry?
- Show me models from Meta.
- Show me models with MIT license.
- Can you describe how you can help me build a prototype using the model?
- Describe how you can build a prototype that uses an OpenAI model with my GitHub token. Don't try to create one yet.
- Recommend a few scenarios for building prototypes with models.
- Tell me about Azure AI Foundry Labs.
- Tell me more about Magentic One.
- What is Omniparser and what are potential use cases?
- Can you help me build a prototype using Omniparser?
- Can you help me deploy OpenAI models?
- What steps do I need to take to deploy OpenAI models on Azure AI Foundry?
- Can you help me understand how I can use OpenAI models on Azure AI Foundry using a GitHub token? Can I use it for production?
- I already have an Azure AI services resource. Can I deploy OpenAI models on it?
- What does quota for OpenAI models mean on Azure AI Foundry?
- Get the current quota for my AI services resource.
This GitHub template provides a minimal setup with the MCP server configuration and all required dependencies, making it easy to get started with your own projects.
This helps you automatically set up the MCP server in your VS Code environment under user settings. You will need `uvx` installed in your environment to run the server.

- Install `uv` by following Installing uv.
- Start a new workspace in VS Code.
- (Optional) Create a `.env` file in the root of your workspace to set environment variables.
- Create `.vscode/mcp.json` in the root of your workspace:

  ```json
  {
    "servers": {
      "mcp_foundry_server": {
        "type": "stdio",
        "command": "uvx",
        "args": [
          "--prerelease=allow",
          "--from",
          "git+https://github.com/azure-ai-foundry/mcp-foundry.git",
          "run-azure-ai-foundry-mcp",
          "--envFile",
          "${workspaceFolder}/.env"
        ]
      }
    }
  }
  ```

- Click the Start button for the server in the `.vscode/mcp.json` file.
- Open GitHub Copilot chat in Agent mode and start asking questions.

See More examples for advanced setup for more details on how to set up the MCP server.
To securely pass information to the MCP server, such as API keys, endpoints, and other sensitive data, you can use environment variables. This is especially important for tools that require authentication or access to external services.
You can set these environment variables in a `.env` file in the root of your project. You can pass the location of the `.env` file when setting up the MCP server, and the server will automatically load these variables when it starts.
See example .env file for a sample configuration.
| Category | Variable | Required? | Description |
|---|---|---|---|
| Model | `GITHUB_TOKEN` | No | GitHub token for testing models for free with rate limits. |
| Knowledge | `AZURE_AI_SEARCH_ENDPOINT` | Always | The endpoint URL for your Azure AI Search service. It should look like this: `https://<your-search-service-name>.search.windows.net/`. |
| | `AZURE_AI_SEARCH_API_VERSION` | No | API version to use. Defaults to `2025-03-01-preview`. |
| | `SEARCH_AUTHENTICATION_METHOD` | Always | `service-principal` or `api-search-key`. |
| | `AZURE_TENANT_ID` | Yes when using `service-principal` | The ID of your Azure Active Directory tenant. |
| | `AZURE_CLIENT_ID` | Yes when using `service-principal` | The ID of your service principal (app registration). |
| | `AZURE_CLIENT_SECRET` | Yes when using `service-principal` | The secret credential for the service principal. |
| | `AZURE_AI_SEARCH_API_KEY` | Yes when using `api-search-key` | The API key for your Azure AI Search service. |
| Evaluation | `EVAL_DATA_DIR` | Always | Path to the JSONL evaluation dataset. |
| | `AZURE_SUBSCRIPTION_ID` | Agent eval or risk/safety evals | Azure subscription ID. |
| | `AZURE_RESOURCE_GROUP` | Agent eval or risk/safety evals | Resource group of your Azure AI project. |
| | `AZURE_PROJECT_NAME` | Agent eval or risk/safety evals | Azure AI project name. |
| | `AZURE_OPENAI_ENDPOINT` | Text quality evaluators | Endpoint for Azure OpenAI. |
| | `AZURE_OPENAI_API_KEY` | Text quality evaluators | API key for Azure OpenAI. |
| | `AZURE_OPENAI_DEPLOYMENT` | Text quality evaluators | Deployment name (e.g., `gpt-4o`). |
| | `AZURE_OPENAI_API_VERSION` | Text quality evaluators | Version of the Azure OpenAI API. |
| | `PROJECT_CONNECTION_STRING` | Agent services | Used for Azure AI Agent querying and evaluation. |
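As a concrete illustration of the variables above, a minimal `.env` for key-based Azure AI Search access plus text quality evaluation might look like the sketch below. Every value is a placeholder; adjust it to your own resources and pick the authentication method that fits your environment.

```
# Knowledge (key-based authentication)
AZURE_AI_SEARCH_ENDPOINT=https://<your-search-service-name>.search.windows.net/
SEARCH_AUTHENTICATION_METHOD=api-search-key
AZURE_AI_SEARCH_API_KEY=<your-search-api-key>

# Evaluation (text quality evaluators)
EVAL_DATA_DIR=./eval_data
AZURE_OPENAI_ENDPOINT=https://<your-openai-resource>.openai.azure.com/
AZURE_OPENAI_API_KEY=<your-azure-openai-key>
AZURE_OPENAI_DEPLOYMENT=gpt-4o
AZURE_OPENAI_API_VERSION=<api-version>
```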
Note

Model
`GITHUB_TOKEN` is used to authenticate with the GitHub API for testing models. It is not required if you are exploring models from the Foundry catalog.
Knowledge
- See Create a search service to learn more about provisioning a search service.
- Azure AI Search supports multiple authentication methods. You can use either Microsoft Entra authentication or key-based authentication to authenticate your requests. The choice of authentication method depends on your security requirements and the Azure environment you are working in.
- See Authentication to learn more about authentication methods for a search service.
Evaluation
- If you're using agent tools or safety evaluators, make sure the Azure project credentials are valid.
- If you're only doing text quality evaluation, the Azure OpenAI endpoint and key are sufficient.
MIT License. See LICENSE for details.