10 changes: 10 additions & 0 deletions .dockerignore
@@ -0,0 +1,10 @@
# Ignore node_modules
node_modules
teamsTab/node_modules

# Ignore build artifacts
dist
teamsTab/dist

# Ignore logs
*.log
32 changes: 32 additions & 0 deletions .env.example
@@ -0,0 +1,32 @@
# Topcoder API and Auth0 M2M configuration
TOPCODER_API_BASE_URL="https://api.topcoder.com/v5"
AUTH0_M2M_TOKEN_URL="https://auth0proxy.topcoder-dev.com/token" # "https://topcoder-dev.auth0.com/oauth/token"
AUTH0_M2M_AUDIENCE="https://m2m.topcoder-dev.com/"
AUTH0_CLIENT_ID=""

# Config for LLM Agent for MS Teams tab app
# Azure AD SSO Configuration (from your App Registration in Entra ID)
AZURE_AD_AUDIENCE="api://diamondlike-crosstied-yuette.ngrok-free.app/82d17b02-2d34-4594-b243-09c516aad2e8"
AZURE_AD_TENANT_ID="9c8f4932-d163-42f7-aed6-b0117730a6e6"
IS_SAME_AZURE_AD_TENANT=false # true to require the logged-in user's tenant to match AZURE_AD_TENANT_ID
MOCK_AZURE_AD_VALIDATION=false # true to accept a mock token instead of validating against Azure AD

AWS_ACCESS_KEY_ID=""
AWS_SECRET_ACCESS_KEY=""
AWS_BEDROCK_REGION="us-east-1"
AWS_BEDROCK_MODEL_ID="anthropic.claude-3-5-sonnet-20240620-v1:0"

MONGO_DB_URL="mongodb://localhost:27017/teams-ai-agent"


# *** *********************************** *** #
# *** Frontend /teamsTab VITE Environment *** #
# Backend api url for frontend
VITE_API_BASE_URL="https://api.topcoder.com/v6/mcp/agent"

# For local development in a web browser, use this variable to bypass MS Teams-related code (authentication, theming, etc.)
VITE_IS_NOT_TEAMS_TAB=true

# Use this variable to bypass Azure AD SSO and use a mock token.
# NOTE: In production, use Azure AD to authenticate requests.
VITE_MOCK_VALIDATE_TOKEN=true
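
The two `VITE_` flags above control how the tab behaves outside a real Teams client. A minimal sketch of how the frontend might branch on them when acquiring a token (assuming the `@microsoft/teams-js` v2 SDK; the helper name is hypothetical):

```ts
// Minimal sketch (assumes @microsoft/teams-js v2; helper name hypothetical).
import { app, authentication } from '@microsoft/teams-js';

export async function getAuthToken(): Promise<string> {
  // Vite exposes VITE_-prefixed env vars as strings on import.meta.env.
  if (import.meta.env.VITE_IS_NOT_TEAMS_TAB === 'true') {
    // Plain-browser development: skip the Teams SDK and return a mock token,
    // paired with MOCK_AZURE_AD_VALIDATION=true on the backend.
    return 'mock-token';
  }
  await app.initialize();
  // Returns the Azure AD SSO token for the signed-in Teams user.
  return authentication.getAuthToken();
}
```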
6 changes: 6 additions & 0 deletions Dockerfile
Expand Up @@ -5,10 +5,16 @@ FROM node:22.13.1-alpine
RUN apk add --no-cache bash
RUN apk update

# Declare ARGs to receive variables from the build command (needed at frontend build time).
ARG VITE_API_BASE_URL

ENV VITE_API_BASE_URL=$VITE_API_BASE_URL

WORKDIR /app
COPY . .
RUN npm install pnpm -g
RUN pnpm install
RUN pnpm run build:frontend
RUN pnpm run build
RUN chmod +x appStartUp.sh
CMD ./appStartUp.sh
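
The `ARG`/`ENV` pair matters because Vite inlines `VITE_`-prefixed variables into the bundle at build time, not at run time. A minimal illustration of the effect (the snippet is a sketch, not actual project code):

```ts
// In frontend source code:
const apiBaseUrl = import.meta.env.VITE_API_BASE_URL;

// After `vite build` runs with VITE_API_BASE_URL set, the bundle contains
// the literal value, e.g.:
// const apiBaseUrl = "https://api.topcoder.com/v6/mcp/agent";
```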
244 changes: 189 additions & 55 deletions README.md
@@ -1,56 +1,190 @@
# Topcoder Model Context Protocol (MCP) Server

## Authentication Based Access via Guards

Tools/Resources/Prompts support authentication via TC JWT and/or M2M JWT. Providing a JWT in requests to the MCP server results in specific listings and behavior based on the JWT's access level/roles/permissions.

#### Using `authGuard` - requires a TC JWT for access

```ts
@Tool({
name: 'query-tc-challenges-private',
description:
'Returns a list of Topcoder challenges based on the query parameters.',
parameters: QUERY_CHALLENGES_TOOL_PARAMETERS,
outputSchema: QUERY_CHALLENGES_TOOL_OUTPUT_SCHEMA,
annotations: {
title: 'Query Public Topcoder Challenges',
readOnlyHint: true,
},
canActivate: authGuard,
})
```

#### Using `checkHasUserRole(Role.Admin)` - TC role-based guard

```ts
@Tool({
name: 'query-tc-challenges-protected',
description:
'Returns a list of Topcoder challenges based on the query parameters.',
parameters: QUERY_CHALLENGES_TOOL_PARAMETERS,
outputSchema: QUERY_CHALLENGES_TOOL_OUTPUT_SCHEMA,
annotations: {
title: 'Query Public Topcoder Challenges',
readOnlyHint: true,
},
canActivate: checkHasUserRole(Role.Admin),
})
```

#### Using `checkM2MScope(M2mScope.QueryPublicChallenges)` - M2M access via scopes

```ts
@Tool({
name: 'query-tc-challenges-m2m',
description:
'Returns a list of Topcoder challenges based on the query parameters.',
parameters: QUERY_CHALLENGES_TOOL_PARAMETERS,
outputSchema: QUERY_CHALLENGES_TOOL_OUTPUT_SCHEMA,
annotations: {
title: 'Query Public Topcoder Challenges',
readOnlyHint: true,
},
canActivate: checkM2MScope(M2mScope.QueryPublicChallenges),
})
```
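
Conceptually, each guard is a predicate evaluated against the authenticated request context before the tool runs, and the role/scope guards are factories that close over their argument. A minimal sketch of that pattern (the context shape and names here are hypothetical, not the repository's actual types):

```ts
// Hypothetical context shape and guard factories; illustrative only.
interface RequestContext {
  user?: { roles?: string[] };
  m2mScopes?: string[];
}

type Guard = (ctx: RequestContext) => boolean;

// Role-based guard: passes only if the TC user carries the given role.
const checkHasUserRole =
  (role: string): Guard =>
  (ctx) =>
    ctx.user?.roles?.includes(role) ?? false;

// M2M guard: passes only if the M2M token carries the given scope.
const checkM2MScope =
  (scope: string): Guard =>
  (ctx) =>
    ctx.m2mScopes?.includes(scope) ?? false;
```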
# Microsoft Teams AI Agent

A production-grade Microsoft Teams Tab app featuring a conversational AI agent powered by **LangChain.js** and **AWS Bedrock**.
The agent integrates with MCP tools to answer real-time queries beyond its base model's knowledge.

---

## ✨ Key Features

* **Conversational AI:** Powered by LangChain.js + AWS Bedrock (Claude 3.5 Sonnet).
* **External Tools:** Fetch live contextual data through external integrations.
* **Real-time Chat Streaming:** Uses SSE for continuous agent thought updates.
* **Conversation History:** Stored and grouped in MongoDB for persistence.
* **Azure AD SSO:** Secure Teams authentication for verified access.
* **Fluent UI + Teams SDK:** Seamless user experience inside Teams.
* **Flexible Deployment:** Manual dev setup + production-ready Docker build.

---

## 🧩 Technology Stack

| Frontend (Vite) | Backend (Node.js + Express) |
| --------------------------- | ----------------------------------------- |
| ✅ React + TypeScript (Vite) | ✅ LangChain.js + AWS Bedrock (Claude 3.5) |
| ✅ Fluent UI + Teams SDK | ✅ MongoDB (Mongoose ODM) |
| ✅ Vite environment support | ✅ MCP Gateway for tool access |

| AI & Security | Infrastructure |
| --------------------------------- | --------------------------------- |
| ✅ AWS Bedrock (Claude 3.5 Sonnet) | ✅ Docker-based production build |
| ✅ Azure Active Directory (SSO) | ✅ Environment-based configuration |
| ✅ JWT Validation + JWKS-RSA | ✅ ngrok for local Teams testing |

---

## 🧠 Local Development Setup (Manual)

You can run the frontend and backend separately for local testing.
This is the preferred approach during active development.

### Prerequisites

* **Node.js:** v22.x
* **MongoDB:** Local instance or MongoDB Atlas
* **ngrok:** To expose your servers for Teams testing
* **[Azure AD App Registration (for SSO)](./docs/AzureConfig.md)**
* **AWS Bedrock access**

### Step 1: Clone Repository

```bash
git clone https://github.com/topcoder-platform/tc-mcp.git
cd tc-mcp
```

---

### Step 2: Backend Setup

```bash
pnpm install
cp .env.example .env
# Edit .env with MongoDB, Azure, and AWS credentials (the frontend VITE_ variables live here too)
pnpm start:dev
```

The backend will run at `http://localhost:3000/v6/mcp/*`.

---

### Step 3: Frontend Setup

```bash
cd teamsTab
pnpm install
pnpm run dev
```

The frontend will run at `http://localhost:5173/teamsTab`.

---

### Step 4: Expose Local Servers for Teams

To test inside Teams, both servers must be publicly reachable. Use a static ngrok URL for the frontend so Azure AD and the Teams app only need to be configured once; a static URL for the backend is convenient for the same reason.

```bash
# For frontend
ngrok http --url=your-ngrok-static-url-frontend.app 5173
# For backend
ngrok http 3000
```

You’ll get two public URLs:

* Frontend → `https://your-ngrok-static-url-frontend.app`
* Backend → `https://<backend-id>.ngrok-free.app`

> **Note:**
> * The ngrok frontend URL must be added to the allowed hosts in `teamsTab/vite.config.ts` for the development environment (see the sketch below).
> * The ngrok backend URL must be set as `VITE_API_BASE_URL` in `.env` for development.
>   Example: `VITE_API_BASE_URL=https://<backend-id>.ngrok-free.app/v6/mcp/agent`
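
For the first point, a minimal sketch of the `vite.config.ts` change (assuming a Vite version that supports `server.allowedHosts`; merge this with the existing config rather than replacing it):

```ts
// teamsTab/vite.config.ts (sketch only; merge with your existing config).
import { defineConfig } from 'vite';

export default defineConfig({
  server: {
    // Allow the ngrok frontend host so Teams can reach the dev server.
    allowedHosts: ['your-ngrok-static-url-frontend.app'],
  },
});
```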

---

### Step 5: Configure Teams Manifest

Edit:

```
teamsTab/appPackageDev/manifest.json
```

### Read: [MsTeamsConfig.md](./docs/MsTeamsConfig.md)


---

### Step 6: Sideload App in Teams

1. Zip the following from `teamsTab/appPackageDev/`:

* `manifest.json`
* `color.png`
* `outline.png`
2. Go to **Microsoft Teams → Apps → Upload a custom app** and upload the zip.

You can now test the full AI agent directly in the Teams client.

---

## 🐳 Production / Deployment (Dockerized)

When you’re ready to deploy (e.g., on **Railway**, **Render**, or **AWS ECS**), use the provided `Dockerfile`.

### Dockerfile Overview

* Installs dependencies
* Builds the frontend (`teamsTab/`)
* Builds the backend
* Starts the server using `appStartUp.sh`
* Frontend and backend agent are served from a single URL (see the sketch below):
  - http://localhost:3000/teamsTab - Frontend for the MS Teams app
  - http://localhost:3000/v6/mcp/agent - Backend agent for the frontend
  - http://localhost:3000/v6/mcp/* - Other backend endpoints (`/mcp`, `/sse`, etc.)
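
Serving both from one URL works because the backend serves the built Vite bundle as static assets. A minimal NestJS-style sketch (assuming `@nestjs/serve-static`; the `rootPath` is hypothetical):

```ts
// Minimal sketch (assumes @nestjs/serve-static; rootPath is hypothetical).
import { Module } from '@nestjs/common';
import { ServeStaticModule } from '@nestjs/serve-static';
import { join } from 'path';

@Module({
  imports: [
    ServeStaticModule.forRoot({
      // Serve the built Vite bundle at http://localhost:3000/teamsTab
      rootPath: join(__dirname, '..', 'teamsTab', 'dist'),
      serveRoot: '/teamsTab',
    }),
  ],
})
export class AppModule {}
```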


### Build and Run

```bash
docker build -t teams-ai-agent .
docker run -d -p 3000:3000 teams-ai-agent
```

> **Note:**
> * Configure environment variables **directly in your hosting platform's dashboard** (e.g., **Railway**, **AWS ECS / Lightsail**, or **Render**); no `.env` file is needed.
> * Most CI/CD platforms pass the required build arguments as environment variables automatically when running the Docker build. For example:
>   `docker build --build-arg VITE_API_BASE_URL="https://api.topcoder.com/v6/mcp/agent" -t teams-ai-agent .`
>
> **💡 Note:**
> * A local Docker build picks up the root `.env` (it is not listed in `.dockerignore`), so there is no need to pass `VITE_API_BASE_URL` as a build arg to `docker build -t teams-ai-agent .`

---


### Optional: ngrok for Local Preview in Docker

You can still run:

```bash
ngrok http --url=your-ngrok-static-url-frontend.app 3000
```

And use that public URL in your Teams manifest for quick cloud-like testing.

---
### Read: [AzureConfig.md](./docs/AzureConfig.md), [MsTeamsConfig.md](./docs/MsTeamsConfig.md)

### Summary

| Mode | How to Run | Notes |
| -------------- | ---------------------------------------------- | --------------------------------------------- |
| **Local Dev** | frontend + backend separately | Fast iteration, live reload |
| **Production** | `docker build && docker run` | Uses built Vite assets; NestJS serves the frontend |
| **Teams Test** | Use ngrok URLs | Needed for Teams to access your local servers |

84 changes: 84 additions & 0 deletions docs/Agent.md
@@ -0,0 +1,84 @@
## Teams AI Agent: System Architecture

This document describes the high-level components of the application and their relationships: how the services, hosted on Azure, interact with each other and with external services to deliver the full functionality to a user within Microsoft Teams.

### Architectural Summary

1. **Client-Side:** The user interacts with the React application, which is served as a Tab inside the Microsoft Teams client. The frontend's primary responsibilities are rendering the UI, managing client-side state, and initiating authenticated API calls.
2. **Authentication:** Azure Active Directory is the identity provider. The frontend uses the Teams JS SDK to get an SSO token, which is sent with every API request. The backend validates this token on every call to ensure the request is secure and authorized.
3. **Backend:** The Node.js application, hosted on Azure App Service, is the core of the system.
* It securely loads all its secrets (API keys, connection strings) from **Azure Key Vault** using a passwordless **Managed Identity**.
* It exposes a single primary API endpoint (`/v6/mcp/agent/chat`) that uses Server-Sent Events (SSE) for real-time communication (a sketch follows this summary).
* It instantiates a **LangChain Agent** to handle the conversational logic.
4. **AI & Tools:** The LangChain Agent orchestrates calls to external services. It sends the user's prompt and conversation history to **AWS Bedrock** for processing and calls the **Topcoder MCP Gateway** when the AI model decides a tool is needed to answer a question.
5. **Data Persistence:** All conversation history is stored in MongoDB (the Cosmos DB MongoDB API when hosted on Azure), providing scalable, durable memory for the agent.
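
A minimal sketch of how that SSE endpoint might look, written Express-style for brevity (the actual backend is NestJS; `agentStream` is a hypothetical stand-in for the LangChain agent's event stream):

```ts
import express from 'express';

// Hypothetical stand-in for the LangChain agent's async event stream.
declare function agentStream(
  message: string,
  sessionId: string,
): AsyncIterable<{ type: 'chunk' | 'tool_start' | 'tool_result' }>;

const app = express();
app.use(express.json());

app.post('/v6/mcp/agent/chat', async (req, res) => {
  // The validateToken middleware is assumed to have run before this handler.
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');

  for await (const event of agentStream(req.body.message, req.body.sessionId)) {
    // Each event mirrors the diagram: type "chunk", "tool_start", or "tool_result".
    res.write(`event: message\ndata: ${JSON.stringify(event)}\n\n`);
  }
  res.write('event: end\ndata: {}\n\n');
  res.end();
});
```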

### Sequence Diagram: A Single Chat Message with a Tool Call

This diagram illustrates the step-by-step flow of data and method calls for a "happy path" scenario where a user sends a message, the agent decides to use a tool, and then responds with a summary.

```mermaid
sequenceDiagram
participant User
participant Frontend as React Frontend
participant Backend as Node.js Backend
participant AzureAD as Azure AD
participant LangChain as LangChain Agent
participant CosmosDB as MongoDB
participant Bedrock as AWS Bedrock
participant MCP as Topcoder MCP

User->>Frontend: Types "Show me an active challenge" and clicks Send
Frontend->>Backend: POST /v6/mcp/agent/chat (with SSO Token)

rect rgb(230, 240, 255)
note over Backend: Middleware: `validateToken` runs
Backend->>AzureAD: Verify Token Signature (using cached public keys)
AzureAD-->>Backend: OK
end

Backend->>LangChain: Create Agent Instance
LangChain->>CosmosDB: getMessageHistory(sessionId)
CosmosDB-->>LangChain: Return previous messages

LangChain->>Bedrock: streamEvents(prompt, history, tools)
Bedrock-->>LangChain: Stream Chunks (Decides to use a tool)

loop Streaming Response to Client
LangChain-->>Backend: Yields 'thinking' chunks
Backend-->>Frontend: SSE: event: message, data: {type: "chunk", ...}
end

LangChain-->>Backend: Yields 'tool_start' event for `query-tc-challenges`
Backend-->>Frontend: SSE: event: message, data: {type: "tool_start", ...}

LangChain->>MCP: callTool('query-tc-challenges', {status: 'Active'})
MCP-->>LangChain: Return JSON result of challenges

LangChain-->>Backend: Yields 'tool_result' event with data
Backend-->>Frontend: SSE: event: message, data: {type: "tool_result", ...}

LangChain->>Bedrock: streamEvents(prompt, history, tool_result)
Bedrock-->>LangChain: Stream Final Summary Chunks

loop Streaming Final Response
LangChain-->>Backend: Yields final 'text' chunks
Backend-->>Frontend: SSE: event: message, data: {type: "chunk", ...}
end

rect rgb(255, 245, 230)
note over Backend, CosmosDB: Finalization
LangChain->>CosmosDB: addMessages(user_prompt, final_ai_response)
CosmosDB-->>LangChain: OK
Backend-->>Frontend: SSE: event: end
end
```
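
The `validateToken` middleware in the diagram verifies the SSO token's signature against Azure AD's published keys. A minimal sketch using `jsonwebtoken` + `jwks-rsa` (the env var names follow `.env.example`; the actual implementation may differ):

```ts
// Minimal sketch of a JWKS-based validateToken middleware (names hypothetical).
import jwt, { JwtHeader, SigningKeyCallback } from 'jsonwebtoken';
import jwksClient from 'jwks-rsa';
import type { Request, Response, NextFunction } from 'express';

const client = jwksClient({
  // Public keys are fetched from the tenant's JWKS endpoint and cached.
  jwksUri: `https://login.microsoftonline.com/${process.env.AZURE_AD_TENANT_ID}/discovery/v2.0/keys`,
  cache: true,
});

function getKey(header: JwtHeader, callback: SigningKeyCallback): void {
  client.getSigningKey(header.kid, (err, key) => {
    if (err || !key) return callback(err ?? new Error('Signing key not found'));
    callback(null, key.getPublicKey());
  });
}

export function validateToken(req: Request, res: Response, next: NextFunction): void {
  const token = (req.headers.authorization ?? '').replace(/^Bearer /, '');
  jwt.verify(token, getKey, { audience: process.env.AZURE_AD_AUDIENCE }, (err) => {
    if (err) {
      res.status(401).json({ error: 'Invalid or expired token' });
      return;
    }
    next();
  });
}
```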

### Sequence Summary

1. **Request & Auth:** The user sends a prompt. The frontend sends it to the backend API along with the SSO token, which is validated.
2. **Memory Retrieval:** The LangChain agent is created and immediately fetches the conversation history from Cosmos DB to provide context for the LLM.
3. **First LLM Call:** The agent sends the full context to AWS Bedrock. Bedrock analyzes the request and decides that it needs to use the `query-tc-challenges` tool. It streams back its initial thoughts and this tool-use instruction.
4. **Tool Execution:** The backend streams the "thinking" and "tool_start" status to the frontend. It then makes a direct API call to the Topcoder MCP Gateway.
5. **Second LLM Call:** Once the tool result is received, the agent sends this new information back to AWS Bedrock, asking it to synthesize a final, human-readable answer.
6. **Final Response:** Bedrock streams the final summary. The backend relays these text chunks to the frontend, which displays them to the user.
7. **Finalization:** Once the stream is complete, the agent's memory manager saves the new user message and the final AI response back to MongoDB for future conversations. The SSE connection is then closed.
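
On the frontend, the stream arrives over a POST request, so `EventSource` (which only supports GET) can't be used directly; instead the response body is read and split into SSE frames. A minimal sketch of that consumption (names hypothetical):

```ts
// Minimal sketch: reading the POST-based SSE stream in the browser.
export async function streamChat(
  message: string,
  ssoToken: string,
  onEvent: (evt: { type: string }) => void,
): Promise<void> {
  const res = await fetch(`${import.meta.env.VITE_API_BASE_URL}/chat`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${ssoToken}`,
    },
    body: JSON.stringify({ message }),
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    // SSE frames are separated by a blank line.
    const frames = buffer.split('\n\n');
    buffer = frames.pop() ?? '';
    for (const frame of frames) {
      const dataLine = frame.split('\n').find((l) => l.startsWith('data: '));
      if (dataLine) onEvent(JSON.parse(dataLine.slice(6)));
    }
  }
}
```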