
Cedar-Mastra Starter

A comprehensive starter template (product roadmap application using ReactFlow) that combines Cedar-OS AI copilot components with Mastra workflows for building intelligent, streaming-capable applications.

Features

  • 🤖 AI Chat Integration: Built-in chat workflows powered by OpenAI through Mastra agents
  • ⚡ Real-time Streaming: Server-sent events (SSE) for streaming AI responses
  • 🎨 Beautiful UI: Cedar-OS components with 3D effects and modern design
  • 🔧 Type-safe Workflows: Mastra-based backend with full TypeScript support
  • 📡 Dual API Modes: Both streaming and non-streaming chat endpoints

Quick Start with Cedar CLI

The fastest way to get started:

npx cedar-os-cli plant-seed

Then select this template when prompted. This will set up the entire project structure and dependencies automatically.

Walk through the starter repo by globally searching for comments starting with [STEP X] to understand how the frontend and backend work and which features are implemented.

For more details, see the Cedar Getting Started Guide.

Manual Setup

Prerequisites

  • Node.js 18+
  • OpenAI API key
  • pnpm (recommended) or npm

Installation

  1. Clone the repository and install dependencies:

     git clone <repository-url>
     cd cedar-mastra-starter
     pnpm install && cd src/backend && pnpm install && cd ../..

  2. Set up environment variables by creating a .env file in the root directory:

     OPENAI_API_KEY=your-openai-api-key-here

  3. Start the development servers:

     npm run dev

This runs both the Next.js frontend and the Mastra backend concurrently.

Project Architecture

Frontend (Next.js + Cedar-OS)

  • Simple Chat UI: See Cedar OS components in action in a pre-configured chat interface
  • Cedar-OS Components: installed shadcn-style, so you can edit them locally
  • Tailwind CSS, TypeScript, Next.js: the patterns you're used to from any Next.js project

Backend (Mastra)

  • Chat Workflow: Example of a Mastra workflow – a chained sequence of tasks including LLM calls
  • Streaming Utils: Examples of streaming text, status updates, and objects like tool calls
  • API Routes: Examples of registering endpoint handlers for interacting with the backend
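The chat workflow's "chained sequence of tasks" can be illustrated with a minimal sketch in plain TypeScript. This is not the actual Mastra API — it only shows the shape of the idea: each step is an async function, and the workflow pipes one step's output into the next. The step names and the fake model call are hypothetical.

```typescript
// Hypothetical sketch of a chained workflow (not the Mastra API):
// each step is an async transform, and chain() composes them in order.
type Step<In, Out> = (input: In) => Promise<Out>;

// Run `first`, then feed its output into `second`.
function chain<A, B, C>(first: Step<A, B>, second: Step<B, C>): Step<A, C> {
  return async (input) => second(await first(input));
}

// Example steps: normalize the prompt, then stand in for an LLM call.
const normalizePrompt: Step<string, string> = async (p) => p.trim();
const callModel: Step<string, string> = async (p) => `echo: ${p}`;

const workflow = chain(normalizePrompt, callModel);
workflow('  hello  ').then((out) => console.log(out)); // prints "echo: hello"
```

The real chat workflow in src/backend follows this same pattern, with Mastra providing typed step definitions and orchestration.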

API Endpoints (Mastra backend)

Non-streaming Chat

POST /chat/execute-function
Content-Type: application/json

{
  "prompt": "Hello, how can you help me?",
  "temperature": 0.7,
  "maxTokens": 1000,
  "systemPrompt": "You are a helpful assistant."
}
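A client can call this endpoint with a plain fetch. The sketch below builds the request options shown above; the base URL and port in the usage comment are assumptions — adjust them to wherever your Mastra backend actually runs.

```typescript
// Shape of the non-streaming chat request body from the README.
interface ChatRequest {
  prompt: string;
  temperature?: number;
  maxTokens?: number;
  systemPrompt?: string;
}

// Build fetch options for a JSON POST to the chat endpoint.
function buildChatRequest(body: ChatRequest) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  };
}

// Usage (base URL is an assumption — point it at your backend):
// const res = await fetch('http://localhost:4111/chat/execute-function',
//   buildChatRequest({ prompt: 'Hello, how can you help me?', temperature: 0.7 }));
```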

Streaming Chat

POST /chat/execute-function/stream
Content-Type: application/json

{
  "prompt": "Tell me a story",
  "temperature": 0.7
}

Returns Server-Sent Events with:

  • JSON Objects: { "type": "stage_update", "status": "update_begin", "message": "Generating response..." }
  • Text Chunks: Streamed AI response text
  • Completion: event: done signal
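On the client, these frames can be split on blank lines and parsed per the SSE wire format. The sketch below is a minimal parser, assuming the event shapes described above ('stage_update' payloads as data lines, a final event: done frame); it is not code from this repository.

```typescript
// One parsed SSE frame: the event name (default 'message') and its data payload.
interface SSEEvent {
  event: string;
  data: string; // raw payload; may be JSON, as in the stage_update frames
}

// Split a raw SSE buffer into frames on blank-line boundaries and
// extract the `event:` and `data:` fields from each frame.
function parseSSE(buffer: string): SSEEvent[] {
  return buffer
    .split('\n\n')
    .filter((frame) => frame.trim().length > 0)
    .map((frame) => {
      let event = 'message';
      const data: string[] = [];
      for (const line of frame.split('\n')) {
        if (line.startsWith('event:')) event = line.slice(6).trim();
        else if (line.startsWith('data:')) data.push(line.slice(5).trim());
      }
      return { event, data: data.join('\n') };
    });
}

const frames = parseSSE(
  'data: {"type":"stage_update","status":"update_begin"}\n\nevent: done\ndata: \n\n'
);
// frames[0] carries the JSON status update; frames[1].event === 'done'
```

In a browser, the built-in EventSource API handles this parsing for GET endpoints; since this endpoint is a POST, a fetch-and-parse approach like the above (or a reader over res.body) is the usual workaround.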

Development

Running the Project

# Start both frontend and backend
npm run dev

# Run frontend only
npm run dev:next

# Run backend only
npm run dev:mastra

License

MIT License - see LICENSE file for details.
