A modern web application that transforms complex topics into simple, easy-to-understand explanations using AI. Built with React, Express.js, and OpenAI integration.
- AI-Powered Explanations: Uses OpenAI's GPT models to generate simple explanations
- Interactive UI: Beautiful, responsive interface with smooth animations
- Smart Fallbacks: Works offline with mock explanations when the backend is unavailable
- Real-time Processing: Get instant explanations with loading states
- Example Prompts: Pre-built examples to get you started
- Node.js (v18 or higher)
- npm or yarn
- OpenAI API key (optional - app works without it!)
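You can confirm the Node.js requirement before installing (assumes `node` is already on your PATH):

```shell
# Print the installed Node.js version; v18 or higher is required
node --version
```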
1. Clone the repository

   ```bash
   git clone https://github.com/FilipDerksen/simply-explained-buddy.git
   cd simply-explained-buddy
   ```

2. Install frontend dependencies

   ```bash
   npm install
   ```

3. Set up the backend

   ```bash
   cd backend
   npm install
   cp env.example .env
   ```

4. Configure OpenAI API (Optional)

   - Get your API key from the OpenAI Platform
   - Add it to `backend/.env`:

     ```
     OPENAI_API_KEY=sk-your-actual-key-here
     ```

   - Note: The app works perfectly without an API key, using example explanations!
Option 1: Run Everything with One Command (Recommended)

```bash
npm run dev:full
```

This starts both frontend and backend simultaneously.

Option 2: Run Separately

Terminal 1 (Frontend):

```bash
npm run dev
```

Frontend runs on: http://localhost:8080

Terminal 2 (Backend):

```bash
npm run dev:backend
```

Backend runs on: http://localhost:3001
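With the backend running (port 3001 by default), you can sanity-check it from another terminal:

```shell
# Ping the backend's health-check endpoint; prints a hint if it is down
curl -s http://localhost:3001/health || echo "backend not reachable on :3001"
```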
- React 18 with TypeScript
- Vite for fast development
- Tailwind CSS for styling
- shadcn/ui components
- React Query for state management
- Express.js server
- OpenAI API integration
- CORS enabled for frontend communication
- Error handling with graceful fallbacks
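The "graceful fallbacks" behavior can be sketched like this (`callModel` is a hypothetical stand-in for the real OpenAI call; this is an illustration, not the actual `backend/server.js` code):

```javascript
// Try the AI call first; on any failure (missing key, network error,
// quota exceeded) return a canned explanation so the app keeps working.
async function explainWithFallback(question, callModel) {
  try {
    return await callModel(question);
  } catch (err) {
    // Graceful fallback: mock explanation when the API is unavailable
    return `Simple explanation of "${question}" (offline example).`;
  }
}
```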
```
simply-explained-buddy/
├── src/                  # Frontend React app
│   ├── components/       # Reusable UI components
│   ├── pages/            # Application pages
│   └── lib/              # Utilities and helpers
├── backend/              # Express.js backend
│   ├── server.js         # Main server file
│   ├── config.js         # Configuration
│   └── .env              # Environment variables
└── SETUP.md              # Detailed setup guide
```
Backend (`.env`):

```
OPENAI_API_KEY=sk-your-actual-key-here
PORT=3001
FRONTEND_URL=http://localhost:8080
```

Note: `OPENAI_MODEL` is optional and defaults to `gpt-3.5-turbo`.
- `GET /health` - Server health check
- `POST /api/explain` - Generate an explanation

  ```json
  { "question": "What is quantum computing?" }
  ```
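The explain endpoint can be exercised from the command line (assumes the backend is running locally on its default port):

```shell
# Ask the backend for an explanation; prints a hint if it is down
curl -s -X POST http://localhost:3001/api/explain \
  -H "Content-Type: application/json" \
  -d '{"question": "What is quantum computing?"}' \
  || echo "backend not reachable on :3001"
```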
- Frontend components: `src/components/`
- Backend routes: `backend/server.js`
- Styling: Tailwind CSS classes
- Frontend: Deploy to Vercel, Netlify, or similar
- Backend: Deploy to Railway, Render, or Heroku
- "Backend Connection Failed"
  - Ensure the backend server is running
  - Check whether the OpenAI API key is configured
- CORS Errors
  - Verify `FRONTEND_URL` in `backend/.env`
- API Key Issues
  - Confirm the OpenAI API key is valid
  - Check billing status on the OpenAI platform
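If you suspect the key never reached the backend, a quick check (run from the `backend/` directory, after copying `env.example` to `.env`):

```shell
# Confirm the key is present in .env; prints a hint if it is missing
grep OPENAI_API_KEY .env || echo "OPENAI_API_KEY not set"
```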
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
This project is open source and available under the MIT License.
This project includes automated CI/CD using GitHub Actions:
- CI: Runs on every pull request - tests, lints, and builds
- Staging: Auto-deploys to staging on merge to main
- Production: Manual deployment with approval gate
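The CI job described above might be wired up roughly like this (an illustrative sketch; the workflow file name and the lint/test/build script names are assumptions, not the repository's actual configuration):

```yaml
# .github/workflows/ci.yml (hypothetical example)
name: CI
on:
  pull_request:          # runs on every pull request
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - run: npm ci
      - run: npm run lint   # assumed script names
      - run: npm test
      - run: npm run build
```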
Set these in your repository settings:
For Frontend Deployment:
- `VERCEL_TOKEN` - Vercel deployment token
- `VERCEL_ORG_ID` - Vercel organization ID
- `VERCEL_PROJECT_ID` - Vercel project ID
For Backend Deployment:
- `RAILWAY_TOKEN` - Railway deployment token
- `RAILWAY_SERVICE_ID` - Railway service ID
Optional:
- `OPENAI_API_KEY` - OpenAI API key (only needed for AI features)
- Pull Request → Automated testing and linting
- Merge to main → Automatic staging deployment (frontend only)
- Manual trigger → Production deployment (frontend only)
Note: Backend deployment is handled separately by Railway, not through GitHub Actions.
- Original Lovable Project: https://lovable.dev/projects/7f6f33f1-7392-4985-a5fd-5fd6ffe93530
- OpenAI Platform: https://platform.openai.com/