A comprehensive starter template (a product roadmap application built with React Flow) that combines Cedar-OS AI copilot components with Mastra workflows for building intelligent, streaming-capable applications.
- 🤖 AI Chat Integration: Built-in chat workflows powered by OpenAI through Mastra agents
- ⚡ Real-time Streaming: Server-sent events (SSE) for streaming AI responses
- 🎨 Beautiful UI: Cedar-OS components with 3D effects and modern design
- 🔧 Type-safe Workflows: Mastra-based backend with full TypeScript support
- 📡 Dual API Modes: Both streaming and non-streaming chat endpoints
The fastest way to get started:
```bash
npx cedar-os-cli plant-seed
```

Then select this template when prompted. This will set up the entire project structure and dependencies automatically.
Walk through the starter repo by searching globally for comments starting with [STEP X] to see how the frontend and backend work and which features are implemented.
For more details, see the Cedar Getting Started Guide.
- Node.js 18+
- OpenAI API key
- pnpm (recommended) or npm
- Clone and install dependencies:
```bash
git clone <repository-url>
cd cedar-mastra-starter
pnpm install && cd src/backend && pnpm install && cd ../..
```

- Set up environment variables:
Create a `.env` file in the root directory:

```bash
OPENAI_API_KEY=your-openai-api-key-here
```

- Start the development servers:
```bash
npm run dev
```

This runs both the Next.js frontend and Mastra backend concurrently:
- Frontend: http://localhost:3000
- Backend API: http://localhost:4111
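Once both servers are up, you can sanity-check the backend from code against the non-streaming endpoint documented below. A minimal sketch in TypeScript; `buildChatRequest` and the `ChatRequest` type are illustrative names, not part of the template:

```typescript
// Shape of the request body accepted by the non-streaming chat endpoint.
// ChatRequest is our illustrative name, not the template's.
interface ChatRequest {
  prompt: string;
  temperature?: number;
  maxTokens?: number;
  systemPrompt?: string;
}

// Build the fetch arguments for POST /chat/execute-function.
function buildChatRequest(baseUrl: string, body: ChatRequest) {
  return {
    url: `${baseUrl}/chat/execute-function`,
    method: "POST" as const,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  };
}

// Usage (requires the backend running on port 4111):
// const req = buildChatRequest("http://localhost:4111", { prompt: "Hello" });
// const res = await fetch(req.url, req);
// console.log(await res.json());
```

The actual `fetch` call is left commented out since it needs the dev server running; the request shape mirrors the JSON bodies shown in the API section below.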
- Simple Chat UI: See Cedar OS components in action in a pre-configured chat interface
- Cedar-OS Components: installed shadcn-style so you can edit them locally
- Tailwind CSS, TypeScript, Next.js: patterns you're used to from any Next.js project
- Chat Workflow: Example of a Mastra workflow – a chained sequence of tasks including LLM calls
- Streaming Utils: Examples of streaming text, status updates, and objects like tool calls
- API Routes: Examples of registering endpoint handlers for interacting with the backend
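Under the hood, the SSE streaming described above boils down to writing framed text to the HTTP response as each piece becomes available. A minimal sketch of that framing in TypeScript; these helper names are illustrative, not the template's actual streaming utils:

```typescript
// Illustrative SSE framing helpers; the starter's real streaming utils may differ.
type StageUpdate = { type: "stage_update"; status: string; message: string };

// Frame a JSON object (e.g. a stage update or tool call) as an SSE data event.
function sseJson(obj: StageUpdate): string {
  return `data: ${JSON.stringify(obj)}\n\n`;
}

// Frame a plain text chunk of the streamed AI response.
function sseText(chunk: string): string {
  return `data: ${chunk}\n\n`;
}

// Signal completion with a named event, matching the `event: done` signal.
function sseDone(): string {
  return "event: done\ndata:\n\n";
}
```

Each frame ends with a blank line, which is how SSE clients know one event has finished and the next begins.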
POST /chat/execute-function

Content-Type: application/json

```json
{
  "prompt": "Hello, how can you help me?",
  "temperature": 0.7,
  "maxTokens": 1000,
  "systemPrompt": "You are a helpful assistant."
}
```

POST /chat/execute-function/stream
Content-Type: application/json

```json
{
  "prompt": "Tell me a story",
  "temperature": 0.7
}
```

Returns Server-Sent Events with:
- JSON Objects: `{ type: 'stage_update', status: 'update_begin', message: 'Generating response...' }`
- Text Chunks: streamed AI response text
- Completion: `event: done` signal
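On the client, these frames can be consumed by splitting the stream on blank lines and dispatching on the payload: JSON objects parse as stage updates, anything else is a text chunk, and the named `done` event ends the stream. A sketch of such a parser; `parseSSE` is our name, not part of the template:

```typescript
// Illustrative parser for the stream's SSE frames; parseSSE is our name, not the template's.
type SSEEvent =
  | { kind: "json"; data: unknown }
  | { kind: "text"; data: string }
  | { kind: "done" };

function parseSSE(raw: string): SSEEvent[] {
  const events: SSEEvent[] = [];
  // SSE frames are separated by blank lines.
  for (const frame of raw.split("\n\n")) {
    if (frame.trim() === "") continue;
    if (frame.startsWith("event: done")) {
      events.push({ kind: "done" });
      continue;
    }
    // Collect the payload from the frame's data lines.
    const data = frame
      .split("\n")
      .filter((line) => line.startsWith("data: "))
      .map((line) => line.slice(6))
      .join("\n");
    try {
      // Stage updates and tool calls arrive as JSON objects.
      events.push({ kind: "json", data: JSON.parse(data) });
    } catch {
      // Anything that isn't valid JSON is a streamed text chunk.
      events.push({ kind: "text", data });
    }
  }
  return events;
}
```

In a real client you would feed the response body through this incrementally (e.g. via a `ReadableStream` reader) rather than parsing one finished string.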
```bash
# Start both frontend and backend
npm run dev

# Run frontend only
npm run dev:next

# Run backend only
npm run dev:mastra
```

MIT License - see LICENSE file for details.