This is a comprehensive skeleton for AI API applications, supporting multiple AI models over both REST and WebSocket, with an input/output testing interface and a modular architecture.
- Multiple AI Model Support: Supports different AI models (General, GPT, Grok, Custom, etc.) with a pluggable architecture
- REST API Endpoints: Dedicated endpoints for each model type plus a generic endpoint
- WebSocket Support: Real-time communication for interactive AI applications
- Web Interface: Enhanced UI with input/output boxes for testing AI model responses
- Modular Architecture: Clean separation of concerns with model manager and configuration
- Configuration Management: Environment variables and configuration files
- `app.py`: Main Flask application with SocketIO support
- `models.py`: AI model manager with abstract base classes
- `config.py`: Configuration management
- `index.html`: Enhanced web interface with tabs for REST, WebSocket, and configuration
- `.env`: Environment variables (API keys, etc.)
- `requirements.txt`: Python dependencies
- `GET /` - Main UI interface
- `POST /api/test` - Generic AI model endpoint
- `POST /api/general/test` - General model endpoint
- `POST /api/grok/test` - Grok model endpoint (`x-ai/grok-4-fast:free` via OpenRouter)
- `POST /api/gpt/test` - GPT model endpoint
- `POST /api/custom/test` - Custom model endpoint
- `GET /api/models` - List available models
- `GET /health` - Health check endpoint
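Any of the POST endpoints above can be exercised from Python's standard library. A minimal sketch follows; the payload field names (`model_type`, `input`, `parameters`) are assumptions and should be adjusted to match what `app.py` actually expects:

```python
import json
import urllib.request

# Hypothetical request payload; the field names are assumptions,
# not confirmed by app.py -- adjust them to match the actual API.
payload = {
    "model_type": "grok",
    "input": "Hello!",
    "parameters": {"temperature": 0.7, "max_tokens": 256},
}

req = urllib.request.Request(
    "http://localhost:3000/api/test",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# With the server running, urllib.request.urlopen(req) sends the
# request and returns the JSON response body.
print(req.get_method(), req.get_full_url())
# → POST http://localhost:3000/api/test
```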
- `connect` - Client connected event
- `disconnect` - Client disconnected event
- `test_request` - Request to process an AI model
- `ping`/`pong` - Connection validation
1. Install dependencies: `pip install -r requirements.txt`
2. Configure environment variables in `.env`
3. Run the application: `python3 app.py`
4. Access the UI at `http://localhost:3000`
- Tabbed interface for REST API, WebSocket, and Configuration
- Model type selector for different AI models
- Parameter configuration (temperature, max tokens, etc.)
- Real-time chat interface for WebSocket communication
- Connection status indicators
- Input/output boxes for testing AI responses
To add a new AI model:
1. Create a new class extending `AIModel` in `models.py`
2. Add the model type to the `AIModelManager`
3. Optionally add a dedicated endpoint in `app.py`
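The steps above can be sketched as follows. The class names `AIModel` and `AIModelManager` come from `models.py`, but the method signatures here are assumptions about the pluggable interface, not the actual code:

```python
from abc import ABC, abstractmethod

# Hypothetical sketch of the pluggable model interface; the real
# signatures in models.py may differ.
class AIModel(ABC):
    @abstractmethod
    def generate(self, prompt: str, **params) -> str:
        """Return the model's response for a prompt."""

class EchoModel(AIModel):
    """Example custom model that simply echoes its input."""
    def generate(self, prompt: str, **params) -> str:
        return f"echo: {prompt}"

class AIModelManager:
    """Registry that routes requests to the named model."""
    def __init__(self):
        self._models = {}

    def register(self, name: str, model: AIModel):
        self._models[name] = model

    def generate(self, name: str, prompt: str, **params) -> str:
        return self._models[name].generate(prompt, **params)

manager = AIModelManager()
manager.register("echo", EchoModel())
print(manager.generate("echo", "hello"))  # → echo: hello
```

A dedicated endpoint in `app.py` would then only need to look up the model by name and forward the request parameters.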
- `PORT` - Server port (default: 3000)
- `GENERAL_API_KEY`, `GPT_API_KEY`, `GROK_API_KEY`, `CUSTOM_API_KEY` - API keys for different models
- `GENERAL_API_URL`, `GPT_API_URL`, `CUSTOM_API_URL` - API endpoints for different models
- `DEFAULT_MODEL` - Default model to use
- `MODEL_*` parameters - Default parameter values
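A sample `.env` built from the variables above. All values are placeholders, and the `MODEL_*` entries are guesses at parameter names, not confirmed by the project:

```
PORT=3000
GENERAL_API_KEY=your-general-key
GPT_API_KEY=your-gpt-key
GROK_API_KEY=your-openrouter-key
CUSTOM_API_KEY=your-custom-key
GENERAL_API_URL=https://api.example.com/v1/chat
GPT_API_URL=https://api.example.com/v1/chat
CUSTOM_API_URL=https://your-endpoint.example.com
DEFAULT_MODEL=general
MODEL_TEMPERATURE=0.7
MODEL_MAX_TOKENS=1024
```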