A minimalist, high-performance Flask REST API template with built-in rate limiting and best practices. This is a great starting point for building robust APIs.
- High-performance REST API setup
- Built-in rate limiting (configurable)
- CORS enabled (for cross-origin requests)
- Clear project structure
- Version control ready (Git pre-initialized)
- Minimal dependencies
- ML model integration ready
Enter Project Directory:
cd SycX-API
Activate the Virtual Environment:
# Linux/Mac
source venv/bin/activate

# Windows
.\venv\Scripts\activate
Install Dependencies:
pip install -r requirements.txt
Run the API:
python3 run.py
The API will start in debug mode. You'll see output in your terminal.
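For reference, a minimal sketch of what an entry point like run.py typically looks like is shown below. It assumes an application factory named create_app exported from app/__init__.py; that name is an assumption, so check the template's own run.py for the real wiring.

# Hypothetical sketch of run.py, assuming a create_app() factory in app/__init__.py.
from app import create_app

app = create_app()

if __name__ == '__main__':
    # Debug mode auto-reloads on code changes and shows tracebacks; disable it in production.
    app.run(host='0.0.0.0', port=5000, debug=True)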
The project follows a clear structure:
SycX-API/
├── app/
│   ├── api/
│   │   └── v1/                      # API version 1
│   │       ├── __init__.py          # Initializes the v1 API
│   │       └── routes.py            # Defines API endpoints
│   ├── config/
│   │   └── config.py                # Configuration settings
│   ├── models/                      # Store your ML models here
│   │   ├── __init__.py
│   │   └── trained_models/          # Directory for saved models
│   ├── services/                    # Business logic and model inference
│   │   └── __init__.py
│   ├── utils/
│   │   └── helpers.py               # Utility functions (e.g., rate limiting)
│   └── __init__.py                  # Initializes the app package
├── tests/                           # Add your unit tests here
├── docs/                            # API documentation
├── venv/                            # Virtual environment
├── .env                             # Environment variables
├── .gitignore                       # Git ignore rules
├── LICENSE                          # License information
├── CONTRIBUTING.md                  # Contribution guidelines
├── README.md                        # This file!
├── requirements.txt                 # Python package dependencies
└── run.py                           # Main application entry point
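The features list mentions CORS and the tree shows app/__init__.py initializing the app package. As a rough, hedged sketch of how that initializer might wire CORS and the versioned API together, assuming Flask-CORS and Flask-RESTful (the template's actual initializer may differ):

# Hypothetical sketch of app/__init__.py; the template's real initializer may differ.
# Assumes Flask-CORS for cross-origin support and Flask-RESTful for the resources.
from flask import Blueprint, Flask
from flask_cors import CORS
from flask_restful import Api

def create_app():
    app = Flask(__name__)
    CORS(app)  # enable cross-origin requests

    # Mount version 1 of the API under /api/v1.
    v1 = Blueprint('api_v1', __name__, url_prefix='/api/v1')
    api = Api(v1)
    # Endpoints from app/api/v1/routes.py would be registered on `api` here.
    app.register_blueprint(v1)
    return app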
Install Postman: Download and install from postman.com
Basic Endpoints:
Health Check:
- Method: GET
- URL: http://localhost:5000/api/v1/health

Hello World:
- Method: GET
- URL: http://localhost:5000/api/v1/hello

Hello World (POST):
- Method: POST
- URL: http://localhost:5000/api/v1/hello
- Headers: Content-Type: application/json
- Body:
  { "message": "Hello from Postman!" }
Create a New Route: In app/api/v1/routes.py, add your new endpoint:

# Assumes the imports already present in routes.py (Resource, request, api, rate_limit).
class MyNewEndpoint(Resource):
    @rate_limit
    def get(self):
        return {"message": "My new endpoint"}, 200

    @rate_limit
    def post(self):
        data = request.get_json()
        # Process your data here
        return {"result": "Processing complete"}, 201

# Register your new endpoint
api.add_resource(MyNewEndpoint, '/my-endpoint')
Test Your Endpoint:
- URL: http://localhost:5000/api/v1/my-endpoint
- Methods: GET, POST
- Headers: Content-Type: application/json
Project Structure for ML:
- Place model classes in app/models/
- Store trained models in app/models/trained_models/
- Put inference logic in app/services/
Example Model Integration:
# app/models/custom_model.py
from transformers import pipeline  # or your preferred ML library

class MyModel:
    def __init__(self):
        self.model = None

    def load_model(self, model_path):
        # Load your model here; pick the pipeline task that matches your model
        self.model = pipeline("text-classification", model=model_path)

    def predict(self, input_data):
        # Make predictions
        return self.model(input_data)
Create a Service:
# app/services/model_service.py
from app.models.custom_model import MyModel

class ModelService:
    def __init__(self):
        self.model = MyModel()
        self.model.load_model('app/models/trained_models/my_model')

    def get_prediction(self, input_data):
        return self.model.predict(input_data)
Create an Endpoint:
# app/api/v1/routes.py
from app.services.model_service import ModelService

class PredictionEndpoint(Resource):
    def __init__(self):
        self.model_service = ModelService()

    @rate_limit
    def post(self):
        data = request.get_json()
        prediction = self.model_service.get_prediction(data['input'])
        return {'prediction': prediction}, 200

# Register endpoint
api.add_resource(PredictionEndpoint, '/predict')
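One caveat: if the Resource class here comes from Flask-RESTful, a new PredictionEndpoint instance is created for every request, so the endpoint above reloads the model on each call. An optional pattern (not part of the template) is to cache a single ModelService per process, sketched below with functools.lru_cache:

# Optional pattern (not the template's code): load the model once per process.
# Assumes the same Resource, request, api and rate_limit imports already used in routes.py.
from functools import lru_cache

from app.services.model_service import ModelService

@lru_cache(maxsize=1)
def get_model_service():
    return ModelService()

class PredictionEndpoint(Resource):
    @rate_limit
    def post(self):
        data = request.get_json()
        prediction = get_model_service().get_prediction(data['input'])
        return {'prediction': prediction}, 200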
Make Prediction Request:
- Method: POST
- URL: http://localhost:5000/api/v1/predict
- Headers: Content-Type: application/json
- Body:
  { "input": "your input data here" }
Create Training Script: Place your training scripts in app/models/training/:

# app/models/training/train_model.py
def train_model(data_path, save_path):
    # Load your data
    # Train your model
    # Save the model
    model.save(save_path)

if __name__ == '__main__':
    train_model('path/to/data', 'app/models/trained_models/my_model')
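The skeleton above is deliberately library-agnostic. As one purely illustrative way to fill it in, here is a scikit-learn version that saves with joblib; neither library ships with the template, and if you adopt it, MyModel.load_model would need to use joblib.load instead of a transformers pipeline:

# Hypothetical, concrete filling of the training skeleton using scikit-learn + joblib.
# Neither library is part of the template's requirements; adapt to your own stack.
import joblib
import pandas as pd
from sklearn.linear_model import LogisticRegression

def train_model(data_path, save_path):
    # Load your data (assumes a CSV with feature columns and a 'label' column)
    df = pd.read_csv(data_path)
    X, y = df.drop(columns=['label']), df['label']

    # Train your model
    model = LogisticRegression(max_iter=1000)
    model.fit(X, y)

    # Save the model
    joblib.dump(model, save_path)

if __name__ == '__main__':
    train_model('path/to/data.csv', 'app/models/trained_models/my_model.joblib')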
Run Training:
python -m app.models.training.train_model
API Versioning:
- Keep different versions in separate directories (app/api/v1/, app/api/v2/)
- Use version prefixes in URLs (/api/v1/, /api/v2/)
Rate Limiting:
- Configure in .env (a minimal decorator sketch follows below):
  RATE_LIMIT=1000
  RATE_LIMIT_PERIOD=15
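The template's actual rate limiting lives in app/utils/helpers.py. Purely as an illustration of how RATE_LIMIT and RATE_LIMIT_PERIOD might drive such a decorator, here is a minimal in-memory, per-process sketch; the period is assumed to be in seconds, so check helpers.py for the real behaviour:

# Illustrative in-memory rate limiter; NOT the template's implementation.
# Assumes RATE_LIMIT requests are allowed per client per RATE_LIMIT_PERIOD seconds.
import os
import time
from collections import defaultdict
from functools import wraps

from flask import request

RATE_LIMIT = int(os.getenv('RATE_LIMIT', '1000'))
RATE_LIMIT_PERIOD = int(os.getenv('RATE_LIMIT_PERIOD', '15'))

_hits = defaultdict(list)  # client address -> timestamps of recent requests

def rate_limit(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        now = time.time()
        key = request.remote_addr
        # Drop timestamps that have fallen outside the current window.
        _hits[key] = [t for t in _hits[key] if now - t < RATE_LIMIT_PERIOD]
        if len(_hits[key]) >= RATE_LIMIT:
            return {'error': 'Rate limit exceeded'}, 429
        _hits[key].append(now)
        return func(*args, **kwargs)
    return wrapper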
Error Handling:
- Use appropriate HTTP status codes
- Return descriptive error messages
- Log errors properly (a minimal error-handler sketch follows below)
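As one way to apply these points, a hedged sketch of JSON error handlers; register_error_handlers is an illustrative name rather than something the template defines, and it would be called from the app initializer:

# Illustrative JSON error handlers; names here are not defined by the template.
import logging

from flask import jsonify

logger = logging.getLogger(__name__)

def register_error_handlers(app):
    @app.errorhandler(404)
    def not_found(error):
        return jsonify({'error': 'Resource not found'}), 404

    @app.errorhandler(500)
    def internal_error(error):
        logger.error('Unhandled server error: %s', error)
        return jsonify({'error': 'Internal server error'}), 500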
Model Management:
- Version your models
- Keep model weights in app/models/trained_models/
- Use environment variables for model paths (see the sketch after this list)
- Document model requirements and dependencies
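For the environment-variable point, a small sketch; MODEL_PATH is a hypothetical variable name, not one the template defines:

# Sketch: read the model path from the environment instead of hard-coding it.
# MODEL_PATH is hypothetical; add it to .env if you adopt this pattern.
import os

from app.models.custom_model import MyModel

MODEL_PATH = os.getenv('MODEL_PATH', 'app/models/trained_models/my_model')

class ModelService:
    def __init__(self):
        self.model = MyModel()
        self.model.load_model(MODEL_PATH)

    def get_prediction(self, input_data):
        return self.model.predict(input_data)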
Testing:
- Write unit tests in tests/ (a pytest sketch follows this list)
- Test API endpoints
- Test model inference
- Run tests before deployment
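A hedged example of what a first test file in tests/ might look like, using pytest and Flask's built-in test client; it assumes a create_app() factory in app/__init__.py and that the health and hello endpoints return 200, so adjust to the template's actual setup:

# Hypothetical tests/test_endpoints.py using pytest and Flask's test client.
# Assumes app/__init__.py exposes create_app(); expected status codes are assumptions.
import pytest

from app import create_app

@pytest.fixture
def client():
    app = create_app()
    app.config['TESTING'] = True
    return app.test_client()

def test_health_check(client):
    response = client.get('/api/v1/health')
    assert response.status_code == 200

def test_hello_get(client):
    response = client.get('/api/v1/hello')
    assert response.status_code == 200

Run it with pytest from the project root.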
API Security:
- Use HTTPS in production
- Implement authentication if needed
- Validate all input data
- Set appropriate CORS policies
Model Security:
- Validate model inputs
- Set resource limits
- Monitor model performance
- Regular security updates
See CONTRIBUTING.md for details on how to contribute to this project.
MIT License. See LICENSE for more information.