A Flask-based API service that integrates with OpenAI's GPT models to provide AI assistance through n8n workflows. This project leverages the `gpt-4o-search-preview` model, which enables web search capabilities.
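For orientation, here is a minimal sketch of what such an endpoint looks like. This is an illustration under stated assumptions, not the repository's actual `main.py`: the `{"prompt": ...}` request and `{"response": ...}` reply shapes follow the examples later in this README, and `web_search_options` is the Chat Completions setting that enables web search on the search-preview models.

```python
# Minimal sketch of an /ask endpoint (illustrative, not the actual main.py).
import os

from flask import Flask, jsonify, request
from openai import OpenAI

app = Flask(__name__)
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

@app.route("/ask", methods=["POST"])
def ask():
    prompt = (request.get_json(silent=True) or {}).get("prompt")
    if not prompt:
        return jsonify({"error": "Missing 'prompt' field"}), 400
    try:
        completion = client.chat.completions.create(
            model="gpt-4o-search-preview",
            web_search_options={},  # enables web search on search-preview models
            messages=[{"role": "user", "content": prompt}],
        )
        return jsonify({"response": completion.choices[0].message.content})
    except Exception as exc:  # basic error handling for OpenAI API issues
        return jsonify({"error": str(exc)}), 500

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```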
## Features

- REST API with a `/ask` endpoint for AI queries
- Integration with OpenAI's GPT models with web search capabilities
- Docker containerization for easy deployment
- Cloudflare Tunnel support for secure public access
- n8n integration for workflow automation
## Prerequisites

- Docker and Docker Compose
- OpenAI API key
- n8n instance (for workflow integration)
- Windows or Linux environment
## Setup

1. Clone this repository.

2. Configure environment variables. Create a `.env` file with the following (a sketch of how these values might be consumed appears after these steps):

   ```
   OPENAI_API_KEY=your_api_key_here
   RUN_API=true
   ```

3. Install dependencies (for local development only):

   ```bash
   pip install -r requirements.txt
   python main.py
   ```
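As referenced in step 2, here is a hedged sketch of how the `.env` values might be loaded. It assumes the `python-dotenv` package; the actual `main.py` may read them differently.

```python
# Illustrative sketch of consuming the .env values (assumes python-dotenv).
import os

from dotenv import load_dotenv

load_dotenv()  # pulls OPENAI_API_KEY and RUN_API into the environment

api_key = os.environ["OPENAI_API_KEY"]
run_api = os.environ.get("RUN_API", "false").lower() == "true"
print(f"API enabled: {run_api}")
```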
## Docker Deployment

1. Build the Docker image.

   Windows:

   ```bash
   docker build -t openai-websearch-api .
   ```

   Linux:

   ```bash
   sudo docker build -t openai-websearch-api .
   ```
2. Start the container.

   The docker-compose.yml uses host network mode for better compatibility with n8n:

   ```yaml
   version: '3'
   services:
     openai-websearch-api:
       build: .
       network_mode: "host"
       env_file:
         - .env
       restart: unless-stopped
   ```

   Start the container:

   ```bash
   sudo docker-compose up -d
   ```
3. Verify deployment:

   ```bash
   sudo docker ps
   ```

   You should see your container running and using the host network.
4. Find your host IP (for n8n configuration):

   ```bash
   ip addr show
   ```

   Look for your main network interface (usually eth0 or ens4). For example:

   ```
   2: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> ... inet 10.138.0.2/32 ...
   ```

   Note down this IP address (in this example, 10.138.0.2).
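If you prefer, the primary outbound IP can also be found from Python. This is a common UDP-socket trick offered purely as a convenience, not part of the project:

```python
# Discover the host's primary outbound IP. Connecting a UDP socket sends no
# packets; it only asks the OS to pick a route and a source address.
import socket

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.connect(("8.8.8.8", 80))  # any external address works; nothing is sent
    print(s.getsockname()[0])
```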
Configure an HTTP Request node in n8n:

1. Set the request details:

   - URL: `http://your.host.ip:5000/ask` (e.g., `http://10.138.0.2:5000/ask`)
   - Method: `POST`
   - Headers: `Content-Type: application/json`
   - Body:

     ```
     { "prompt": {{$json.intent}} }
     ```

2. Process the response in subsequent nodes using `{{$json.response}}`.
## Troubleshooting

If you encounter connection issues:

1. Verify the API is running:

   ```bash
   curl -X POST http://localhost:5000/ask -H "Content-Type: application/json" -d "{\"prompt\":\"test\"}"
   ```

2. Check container logs:

   ```bash
   sudo docker logs $(sudo docker ps -q --filter ancestor=openai-websearch-api)
   ```

3. Verify network access:

   ```bash
   # Test from the host machine
   curl -X POST http://your.host.ip:5000/ask -H "Content-Type: application/json" -d "{\"prompt\":\"test\"}"
   ```

4. Common issues and solutions:

   - If n8n can't connect, ensure it's running on the same network or has access to the host network.
   - If the host IP changes, update the n8n HTTP Request node URL accordingly.
   - For security, consider setting up proper network isolation in production environments.
## Cloudflare Tunnel Deployment

1. Deploy the Docker container (follow the Docker deployment steps above).

2. Set up Cloudflare Tunnel.

   Windows:

   - Download cloudflared:

     ```powershell
     Invoke-WebRequest -Uri https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-windows-amd64.exe -OutFile cloudflared.exe
     ```

   - Start a tunnel:

     ```powershell
     .\cloudflared.exe tunnel --url http://localhost:5000
     ```

   - Keep the tunnel running in the background:

     ```powershell
     Start-Process -NoNewWindow .\cloudflared.exe -ArgumentList "tunnel --url http://localhost:5000"
     ```

   Linux:

   - Download and install cloudflared:

     ```bash
     curl -L --output cloudflared.deb https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb
     sudo dpkg -i cloudflared.deb
     ```

   - Start a tunnel:

     ```bash
     cloudflared tunnel --url http://localhost:5000
     ```

   - Keep the tunnel running in the background:

     ```bash
     nohup cloudflared tunnel --url http://localhost:5000 > cloudflared.log 2>&1 &
     ```

3. Note the assigned URL (e.g., `https://something-random-name.trycloudflare.com`).
## API Usage

Send POST requests to the `/ask` endpoint:

```
POST /ask
Content-Type: application/json

{
  "prompt": "Your question for the AI assistant"
}
```

Example with curl:

```bash
curl -X POST http://localhost:5000/ask -H "Content-Type: application/json" -d "{\"prompt\":\"What is the current Bitcoin price?\"}"
```

Example with PowerShell (Windows):

```powershell
Invoke-WebRequest -Uri "http://localhost:5000/ask" -Method POST -ContentType "application/json" -Body '{"prompt":"What is the current Bitcoin price?"}'
```
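Example with Python, as a convenience sketch (assumes the `requests` package, and that the API returns its text under a `response` key, consistent with the `{{$json.response}}` expression used in the n8n sections):

```python
# Send a prompt to the /ask endpoint and print the model's answer.
import requests

resp = requests.post(
    "http://localhost:5000/ask",
    json={"prompt": "What is the current Bitcoin price?"},
    timeout=60,  # web-search-backed completions can take a while
)
resp.raise_for_status()
print(resp.json()["response"])  # "response" key assumed from the n8n sections
```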
Configure an HTTP Request node in n8n:

1. Set the request details:

   - URL: `http://localhost:5000/ask` (local) or `https://your-tunnel-url.trycloudflare.com/ask` (with Cloudflare)
   - Method: `POST`
   - Headers: `Content-Type: application/json`
   - Body:

     ```
     { "prompt": {{$json.intent}} }
     ```

2. Process the response in subsequent nodes using `{{$json.response}}`.
## VPS Deployment

To deploy on a VPS:

1. Transfer all project files to your VPS.

   Windows to Linux VPS:

   ```bash
   scp -r ./* user@your-vps-ip:/path/to/app/
   ```

   Linux to Linux VPS:

   ```bash
   rsync -avz --exclude 'venv' --exclude '.git' ./ user@your-vps-ip:/path/to/app/
   ```

2. Install Docker and Docker Compose on the VPS.

   Ubuntu/Debian:

   ```bash
   sudo apt update
   sudo apt install -y docker.io
   # Install Docker Compose V2 (plugin method)
   sudo apt install -y docker-compose-plugin
   # OR install standalone docker-compose (v2 release assets use lowercase names)
   sudo curl -L "https://github.com/docker/compose/releases/download/v2.24.6/docker-compose-$(uname -s | tr '[:upper:]' '[:lower:]')-$(uname -m)" -o /usr/local/bin/docker-compose
   sudo chmod +x /usr/local/bin/docker-compose
   sudo systemctl enable docker
   sudo systemctl start docker
   ```

   CentOS/RHEL:

   ```bash
   sudo yum install -y docker
   # Install Docker Compose V2 (plugin method)
   sudo yum install -y docker-compose-plugin
   # OR install standalone docker-compose (v2 release assets use lowercase names)
   sudo curl -L "https://github.com/docker/compose/releases/download/v2.24.6/docker-compose-$(uname -s | tr '[:upper:]' '[:lower:]')-$(uname -m)" -o /usr/local/bin/docker-compose
   sudo chmod +x /usr/local/bin/docker-compose
   sudo systemctl enable docker
   sudo systemctl start docker
   ```

3. Follow the Docker deployment steps above (using sudo for all Docker commands).

4. Install cloudflared on the VPS:

   ```bash
   curl -L --output cloudflared.deb https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb
   sudo dpkg -i cloudflared.deb
   ```

5. Run the tunnel command on the VPS:

   ```bash
   nohup cloudflared tunnel --url http://localhost:5000 > cloudflared.log 2>&1 &
   ```
6. To make the tunnel start automatically on system boot, create a systemd service:

   ```bash
   sudo nano /etc/systemd/system/cloudflared.service
   ```

   Add the following content. Adjust `User` to your VPS user, and verify the binary path with `which cloudflared` (the deb package installs it to /usr/bin/cloudflared):

   ```ini
   [Unit]
   Description=Cloudflare Tunnel
   After=network.target

   [Service]
   ExecStart=/usr/bin/cloudflared tunnel --url http://localhost:5000
   Restart=always
   User=ubuntu

   [Install]
   WantedBy=multi-user.target
   ```

   Enable and start the service:

   ```bash
   sudo systemctl enable cloudflared
   sudo systemctl start cloudflared
   ```
## Automatic Restarts

The Docker containers restart automatically on system boot thanks to the `restart: unless-stopped` directive in docker-compose.yml. This ensures your API service comes back online automatically after a VM restart.
After a system restart, some additional services might need manual restart or verification:

1. Nginx (if used as a reverse proxy):

   ```bash
   # Check nginx status
   sudo systemctl status nginx
   # Restart nginx if needed
   sudo systemctl restart nginx
   ```

2. Verify all services are running:

   ```bash
   # Check Docker containers
   sudo docker ps
   # Check nginx
   sudo systemctl status nginx
   # Check cloudflared (if using tunnel)
   sudo systemctl status cloudflared
   ```
To ensure nginx starts automatically on system boot:

```bash
# Enable nginx to start on boot
sudo systemctl enable nginx
# Verify it's enabled
sudo systemctl is-enabled nginx
```
You can also create a simple restart script:

```bash
# Create a restart script
sudo nano /usr/local/bin/restart-services.sh
```

Add this content:

```bash
#!/bin/bash
sudo systemctl restart nginx
sudo docker-compose -f /path/to/your/docker-compose.yml up -d
sudo systemctl restart cloudflared  # if using Cloudflare tunnel
```

Make it executable:

```bash
sudo chmod +x /usr/local/bin/restart-services.sh
```

You can run this script manually after a system restart if needed:

```bash
sudo /usr/local/bin/restart-services.sh
```
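To confirm the API is back up after a reboot without testing by hand, a small poller along these lines can help. It is a convenience sketch using this README's default endpoint and port, not part of the project:

```python
# Poll the /ask endpoint until it answers, then report success.
import time

import requests

URL = "http://localhost:5000/ask"

for attempt in range(10):
    try:
        r = requests.post(URL, json={"prompt": "test"}, timeout=10)
        print(f"API is up (HTTP {r.status_code})")
        break
    except requests.RequestException:
        print(f"Attempt {attempt + 1}: API not ready, retrying in 5s...")
        time.sleep(5)
else:
    print("API did not come back up; check the container logs.")
```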
## Notes

- API Security: Consider adding authentication for production deployments (a minimal sketch follows this list)
- Cloudflare URL Changes: The free quick tunnel URL changes each time you restart the tunnel
- Rate Limiting: Be mindful of OpenAI API rate limits and costs
- Error Handling: The API includes basic error handling for OpenAI API issues
- Linux Permissions: If running on Linux, ensure proper permissions for Docker and cloudflared
- Docker without sudo: To run Docker without sudo on Linux, add your user to the docker group:

  ```bash
  sudo usermod -aG docker $USER
  # Log out and log back in for changes to take effect
  ```

- Network Mode: Using host network mode provides better compatibility with n8n but may not be suitable for all production environments
- Security: When using host network mode, your API is accessible on your host's network interface. Consider implementing additional security measures
- IP Address: The host IP may change after a system restart on some cloud platforms. Update your n8n configuration accordingly
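As mentioned under API Security, production deployments should add authentication. Here is a minimal sketch of one approach; it is not part of the current codebase, and the `API_KEY` variable and `X-API-Key` header are illustrative choices:

```python
# Minimal API-key check for the Flask endpoint (illustrative sketch).
import os
from functools import wraps

from flask import jsonify, request

API_KEY = os.environ.get("API_KEY")  # hypothetical extra .env entry

def require_api_key(view):
    """Reject requests whose X-API-Key header doesn't match API_KEY."""
    @wraps(view)
    def wrapped(*args, **kwargs):
        if API_KEY and request.headers.get("X-API-Key") != API_KEY:
            return jsonify({"error": "Unauthorized"}), 401
        return view(*args, **kwargs)
    return wrapped

# Usage: stack the decorator under the route, e.g.
# @app.route("/ask", methods=["POST"])
# @require_api_key
# def ask(): ...
```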