A lightweight Docker container that reads call data directly from Asterisk's database (CDR and CEL tables) and sends it to SIPSTACK's API-Regional service for real-time call analytics.
This connector uses a database-driven approach (v0.13.0+) to collect call data:
- Direct Database Reading: Polls CDR and CEL tables directly from Asterisk's database
- Progressive Call Shipping: Ships calls in phases (initial → update → complete)
- Recording Detection: Supports database table lookup or file system monitoring
- Smart Retry Logic: Failed API calls retry with exponential backoff for up to 48 hours (see the sketch after this list)
- Multi-Region Support: Automatically routes to correct regional API endpoint
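The retry behaviour can be pictured with a short sketch (illustrative only, not the connector's actual code; `ship_batch` and the delay constants are assumptions): failed shipments are retried with exponentially growing, jittered delays until a 48-hour deadline passes.

```python
# Illustrative sketch of exponential-backoff retry capped at 48 hours.
# Not the connector's source; ship_batch and the constants are assumptions.
import random
import time

MAX_RETRY_WINDOW = 48 * 3600   # give up after 48 hours
BASE_DELAY = 5                 # first retry after ~5 seconds
MAX_DELAY = 15 * 60            # cap a single wait at 15 minutes

def ship_with_retry(ship_batch, batch):
    deadline = time.monotonic() + MAX_RETRY_WINDOW
    attempt = 0
    while time.monotonic() < deadline:
        try:
            return ship_batch(batch)              # POST the batch to the regional API
        except Exception as exc:                  # network error, 5xx response, etc.
            delay = min(BASE_DELAY * 2 ** attempt, MAX_DELAY)
            delay += random.uniform(0, delay / 10)  # jitter to avoid thundering herd
            print(f"ship failed ({exc}); retrying in {delay:.0f}s")
            time.sleep(delay)
            attempt += 1
    raise RuntimeError("gave up after 48 hours")
```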
Prerequisites:
- Docker installed
- Asterisk 16+ with database CDR/CEL storage enabled
- PostgreSQL/MySQL database with CDR and CEL tables
- SIPSTACK API key
First, create the CDR and CEL tables in your database (if not already created/in use).
For PostgreSQL:
-- Create CDR table
CREATE TABLE cdr (
id SERIAL PRIMARY KEY,
calldate TIMESTAMP NOT NULL DEFAULT NOW(),
clid VARCHAR(80) NOT NULL DEFAULT '',
src VARCHAR(80) NOT NULL DEFAULT '',
dst VARCHAR(80) NOT NULL DEFAULT '',
dcontext VARCHAR(80) NOT NULL DEFAULT '',
channel VARCHAR(80) NOT NULL DEFAULT '',
dstchannel VARCHAR(80) NOT NULL DEFAULT '',
lastapp VARCHAR(80) NOT NULL DEFAULT '',
lastdata VARCHAR(80) NOT NULL DEFAULT '',
duration INTEGER NOT NULL DEFAULT 0,
billsec INTEGER NOT NULL DEFAULT 0,
disposition VARCHAR(45) NOT NULL DEFAULT '',
amaflags INTEGER NOT NULL DEFAULT 0,
accountcode VARCHAR(20) NOT NULL DEFAULT '',
uniqueid VARCHAR(150) NOT NULL DEFAULT '',
userfield VARCHAR(255) NOT NULL DEFAULT '',
linkedid VARCHAR(150) NOT NULL DEFAULT '',
sequence INTEGER NOT NULL DEFAULT 0,
peeraccount VARCHAR(20) NOT NULL DEFAULT ''
);
-- Create CEL table
CREATE TABLE cel (
id SERIAL PRIMARY KEY,
eventtype VARCHAR(30) NOT NULL,
eventtime TIMESTAMP NOT NULL,
cid_name VARCHAR(80) NOT NULL DEFAULT '',
cid_num VARCHAR(80) NOT NULL DEFAULT '',
cid_ani VARCHAR(80) NOT NULL DEFAULT '',
cid_rdnis VARCHAR(80) NOT NULL DEFAULT '',
cid_dnid VARCHAR(80) NOT NULL DEFAULT '',
exten VARCHAR(80) NOT NULL DEFAULT '',
context VARCHAR(80) NOT NULL DEFAULT '',
channame VARCHAR(80) NOT NULL DEFAULT '',
appname VARCHAR(80) NOT NULL DEFAULT '',
appdata VARCHAR(512) NOT NULL DEFAULT '',
amaflags INTEGER NOT NULL DEFAULT 0,
accountcode VARCHAR(20) NOT NULL DEFAULT '',
uniqueid VARCHAR(150) NOT NULL DEFAULT '',
linkedid VARCHAR(150) NOT NULL DEFAULT '',
peer VARCHAR(80) NOT NULL DEFAULT '',
userdeftype VARCHAR(255) NOT NULL DEFAULT '',
extra VARCHAR(512) NOT NULL DEFAULT ''
);
-- Create indexes for performance
CREATE INDEX idx_cdr_calldate ON cdr(calldate);
CREATE INDEX idx_cdr_linkedid ON cdr(linkedid);
CREATE INDEX idx_cel_eventtime ON cel(eventtime);
CREATE INDEX idx_cel_linkedid ON cel(linkedid);
For MySQL:
-- Create CDR table
CREATE TABLE cdr (
id INT AUTO_INCREMENT PRIMARY KEY,
calldate DATETIME NOT NULL,
clid VARCHAR(80) NOT NULL DEFAULT '',
src VARCHAR(80) NOT NULL DEFAULT '',
dst VARCHAR(80) NOT NULL DEFAULT '',
dcontext VARCHAR(80) NOT NULL DEFAULT '',
channel VARCHAR(80) NOT NULL DEFAULT '',
dstchannel VARCHAR(80) NOT NULL DEFAULT '',
lastapp VARCHAR(80) NOT NULL DEFAULT '',
lastdata VARCHAR(80) NOT NULL DEFAULT '',
duration INT NOT NULL DEFAULT 0,
billsec INT NOT NULL DEFAULT 0,
disposition VARCHAR(45) NOT NULL DEFAULT '',
amaflags INT NOT NULL DEFAULT 0,
accountcode VARCHAR(20) NOT NULL DEFAULT '',
uniqueid VARCHAR(150) NOT NULL DEFAULT '',
userfield VARCHAR(255) NOT NULL DEFAULT '',
linkedid VARCHAR(150) NOT NULL DEFAULT '',
sequence INT NOT NULL DEFAULT 0,
peeraccount VARCHAR(20) NOT NULL DEFAULT '',
INDEX idx_calldate (calldate),
INDEX idx_linkedid (linkedid)
) ENGINE=InnoDB;
-- Create CEL table
CREATE TABLE cel (
id INT AUTO_INCREMENT PRIMARY KEY,
eventtype VARCHAR(30) NOT NULL,
eventtime DATETIME NOT NULL,
cid_name VARCHAR(80) NOT NULL DEFAULT '',
cid_num VARCHAR(80) NOT NULL DEFAULT '',
cid_ani VARCHAR(80) NOT NULL DEFAULT '',
cid_rdnis VARCHAR(80) NOT NULL DEFAULT '',
cid_dnid VARCHAR(80) NOT NULL DEFAULT '',
exten VARCHAR(80) NOT NULL DEFAULT '',
context VARCHAR(80) NOT NULL DEFAULT '',
channame VARCHAR(80) NOT NULL DEFAULT '',
appname VARCHAR(80) NOT NULL DEFAULT '',
appdata VARCHAR(512) NOT NULL DEFAULT '',
amaflags INT NOT NULL DEFAULT 0,
accountcode VARCHAR(20) NOT NULL DEFAULT '',
uniqueid VARCHAR(150) NOT NULL DEFAULT '',
linkedid VARCHAR(150) NOT NULL DEFAULT '',
peer VARCHAR(80) NOT NULL DEFAULT '',
userdeftype VARCHAR(255) NOT NULL DEFAULT '',
extra VARCHAR(512) NOT NULL DEFAULT '',
INDEX idx_eventtime (eventtime),
INDEX idx_linkedid (linkedid)
) ENGINE=InnoDB;
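Before wiring up ODBC, you can optionally confirm the tables are reachable from Python. A minimal sketch, assuming mysql-connector-python for MySQL (for PostgreSQL, psycopg2 with `psycopg2.connect(host=..., dbname=...)` is the analogue):

```python
# Minimal sketch: confirm the cdr and cel tables exist and are readable.
# Assumes mysql-connector-python (pip install mysql-connector-python);
# adjust credentials to match your database.
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", port=3306, user="asterisk",
    password="your_password", database="asterisk",
)
cur = conn.cursor()
for table in ("cdr", "cel"):
    cur.execute(f"SELECT COUNT(*) FROM {table}")
    print(table, "rows:", cur.fetchone()[0])
conn.close()
```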
Install ODBC drivers and configure the connection:
For PostgreSQL:
# Install PostgreSQL ODBC driver
apt-get install odbc-postgresql
# Configure /etc/odbcinst.ini
[PostgreSQL]
Description = PostgreSQL ODBC driver
Driver = /usr/lib/x86_64-linux-gnu/odbc/psqlodbca.so
Setup = /usr/lib/x86_64-linux-gnu/odbc/libodbcpsqlS.so
# Configure /etc/odbc.ini
[asterisk-connector]
Description = PostgreSQL connection for Asterisk
Driver = PostgreSQL
Database = asterisk
Servername = localhost
Port = 5432
Username = asterisk
Password = your_password
For MySQL:
# Install MySQL ODBC driver
apt-get install unixodbc unixodbc-dev libmyodbc
# Configure /etc/odbcinst.ini
[MySQL]
Description = MySQL ODBC driver
Driver = /usr/lib/x86_64-linux-gnu/odbc/libmyodbc.so
Setup = /usr/lib/x86_64-linux-gnu/odbc/libodbcmyS.so
# Configure /etc/odbc.ini
[asterisk-connector]
Description = MySQL connection for Asterisk
Driver = MySQL
Server = localhost
Port = 3306
Database = asterisk
Username = asterisk
Password = your_password
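To confirm the DSN resolves before Asterisk uses it, a quick check with pyodbc is one option (a sketch; assumes pyodbc is installed and uses the [asterisk-connector] DSN defined above). The classic `isql asterisk-connector asterisk your_password` test from unixODBC works just as well.

```python
# Quick DSN sanity check using pyodbc (pip install pyodbc).
# Uses the [asterisk-connector] DSN defined in /etc/odbc.ini above.
import pyodbc

conn = pyodbc.connect("DSN=asterisk-connector;UID=asterisk;PWD=your_password")
cur = conn.cursor()
cur.execute("SELECT 1")
print("ODBC connection OK:", cur.fetchone()[0] == 1)
conn.close()
```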
Edit /etc/asterisk/res_odbc.conf:
[asterisk]
enabled => yes
dsn => asterisk-connector
pre-connect => yes
max_connections => 5
username => asterisk
password => your_password
Edit /etc/asterisk/cdr.conf:
[general]
enable = yes
batch = yes
size = 100
time = 300
scheduleronly = no
safeshutdown = yes
Edit /etc/asterisk/cdr_odbc.conf:
[global]
dsn = asterisk
table = cdr
loguniqueid = yes
loguserfield = yes
newcdrcolumns = yes
IMPORTANT: CEL is REQUIRED for complete call tracking. Without CEL, you lose:
- DNID (actual dialed number) tracking
- Call transfers and threading
- Queue events and IVR navigation
- Recording detection via MixMonitor events
- DTMF digit tracking
Check which CEL modules you have:
ls /usr/lib*/asterisk/modules/cel_*.so
Option A: If you have cel_odbc.so (Best Performance)
Edit /etc/asterisk/cel.conf:
[general]
enable = yes
dateformat = %F %T.%3q
events = CHAN_START,CHAN_END,HANGUP,ANSWER,BRIDGE_ENTER,BRIDGE_EXIT,APP_START,APP_END,PARK_START,PARK_END,LINKEDID_END
[odbc]
connection = asterisk
table = cel
Set in connector .env:
CEL_MODE=db
DB_TABLE_CEL=cel
Option B: If you have cel_custom.so (Universal Compatibility)
Edit /etc/asterisk/cel_custom.conf:
[mappings]
; CSV format matching database columns
Master.csv => ${CSV_QUOTE(${eventtype})},${CSV_QUOTE(${eventtime})},${CSV_QUOTE(${CALLERID(name)})},${CSV_QUOTE(${CALLERID(num)})},${CSV_QUOTE(${CALLERID(ANI)})},${CSV_QUOTE(${CALLERID(RDNIS)})},${CSV_QUOTE(${CALLERID(DNID)})},${CSV_QUOTE(${CHANNEL(exten)})},${CSV_QUOTE(${CHANNEL(context)})},${CSV_QUOTE(${CHANNEL(channame)})},${CSV_QUOTE(${CHANNEL(appname)})},${CSV_QUOTE(${CHANNEL(appdata)})},${CSV_QUOTE(${CHANNEL(amaflags)})},${CSV_QUOTE(${CHANNEL(accountcode)})},${CSV_QUOTE(${CHANNEL(uniqueid)})},${CSV_QUOTE(${CHANNEL(linkedid)})},${CSV_QUOTE(${BRIDGEPEER})},${CSV_QUOTE(${CHANNEL(userdeftype)})},${CSV_QUOTE(${CHANNEL(extra)})}
Edit /etc/asterisk/cel.conf:
[general]
enable = yes
dateformat = %F %T.%3q
events = CHAN_START,CHAN_END,HANGUP,ANSWER,BRIDGE_ENTER,BRIDGE_EXIT,APP_START,APP_END,PARK_START,PARK_END,LINKEDID_END
Set in connector .env:
CEL_MODE=csv
CEL_CSV_PATH=/var/log/asterisk/cel-custom/Master.csv
Add Docker volume mount:
volumes:
- /var/log/asterisk:/var/log/asterisk:ro
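In CSV mode, rows are appended to Master.csv in the column order defined by the mapping above. The following is an illustrative parser sketch (not the connector's actual code) showing how those rows map back onto the cel column names:

```python
# Illustrative sketch: read rows from Master.csv and map them to the column
# order defined by the cel_custom.conf mapping above.
import csv

CEL_COLUMNS = [
    "eventtype", "eventtime", "cid_name", "cid_num", "cid_ani", "cid_rdnis",
    "cid_dnid", "exten", "context", "channame", "appname", "appdata",
    "amaflags", "accountcode", "uniqueid", "linkedid", "peer",
    "userdeftype", "extra",
]

def read_cel_csv(path="/var/log/asterisk/cel-custom/Master.csv"):
    """Yield CEL events as dicts keyed by the cel table column names."""
    with open(path, newline="") as fh:
        for row in csv.reader(fh):
            if len(row) == len(CEL_COLUMNS):
                yield dict(zip(CEL_COLUMNS, row))

for event in read_cel_csv():
    if event["eventtype"] == "LINKEDID_END":
        print("call finished:", event["linkedid"])
```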
Option C: If you have cel_manager.so (AMI Fallback)
Edit /etc/asterisk/cel.conf:
[general]
enable = yes
dateformat = %F %T.%3q
events = CHAN_START,CHAN_END,HANGUP,ANSWER,BRIDGE_ENTER,BRIDGE_EXIT,APP_START,APP_END,PARK_START,PARK_END,LINKEDID_END
[manager]
enabled = yes
Set in connector .env:
CEL_MODE=ami
AMI_HOST=localhost
AMI_PORT=5038
AMI_USERNAME=manager-sipstack
AMI_PASSWORD=your_secure_password
Ensure AMI user has CEL read permission in /etc/asterisk/manager.conf:
[manager-sipstack]
secret = your_secure_password
read = cdr,reporting
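To sanity-check the AMI credentials before switching to CEL_MODE=ami, a raw-socket login like the hypothetical helper below can be used; after a test call you should see `Event: CEL` lines in the output.

```python
# Hypothetical helper (not part of the connector): log in to AMI with the
# credentials from .env and print incoming manager traffic so you can confirm
# CEL events arrive. Uses only the plain AMI protocol over TCP.
import os
import socket

host = os.getenv("AMI_HOST", "localhost")
port = int(os.getenv("AMI_PORT", "5038"))
user = os.getenv("AMI_USERNAME", "manager-sipstack")
secret = os.getenv("AMI_PASSWORD", "your_secure_password")

with socket.create_connection((host, port), timeout=10) as sock:
    print(sock.recv(1024).decode(errors="replace"))  # banner: "Asterisk Call Manager/..."
    sock.sendall(f"Action: Login\r\nUsername: {user}\r\nSecret: {secret}\r\n\r\n".encode())
    # Print whatever the manager sends for a short while; look for
    # "Response: Success" and, after a test call, "Event: CEL" blocks.
    sock.settimeout(15)
    try:
        while True:
            data = sock.recv(4096)
            if not data:
                break
            print(data.decode(errors="replace"), end="")
    except socket.timeout:
        pass
```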
Edit /etc/asterisk/modules.conf to ensure these modules are loaded:
; Ensure these are loaded (or not set to noload)
load => res_odbc.so
load => cdr_odbc.so
load => cel_odbc.so
# Reload Asterisk modules
asterisk -rx "module reload res_odbc.so"
asterisk -rx "module reload cdr_odbc.so"
asterisk -rx "module reload cel_odbc.so"
# Or restart Asterisk
systemctl restart asterisk
# Verify modules are loaded
asterisk -rx "module show like odbc"
# Test ODBC connection
asterisk -rx "odbc show"
# Verify CDR is working
asterisk -rx "cdr show status"
# Verify CEL is working
asterisk -rx "cel show status"Make a test call and verify records appear in the database:
-- Check for CDR records
SELECT * FROM cdr ORDER BY calldate DESC LIMIT 5;
-- Check for CEL events
SELECT * FROM cel ORDER BY eventtime DESC LIMIT 10;
IMPORTANT: MariaDB/MySQL must be configured to listen on the network interface Docker will use.
For MariaDB/MySQL:
- Check current bind-address:
  grep bind-address /etc/mysql/mariadb.conf.d/50-server.cnf
  # or
  grep bind-address /etc/my.cnf
- Update bind-address to allow Docker connections:
  # In /etc/mysql/mariadb.conf.d/50-server.cnf or /etc/my.cnf
  # Option 1: Listen on all interfaces (simplest)
  bind-address = 0.0.0.0
  # Option 2: Listen on specific IPs (more secure)
  bind-address = 127.0.0.1,172.17.0.1
  # Option 3: Comment out to listen on all (MariaDB default)
  #bind-address = 127.0.0.1
- Restart MariaDB/MySQL:
  sudo systemctl restart mariadb
  # or
  sudo systemctl restart mysql
- Verify it's listening:
  sudo netstat -tlnp | grep 3306
  # Should show 0.0.0.0:3306 or 172.17.0.1:3306
- Grant user permissions for Docker subnet:
  GRANT ALL PRIVILEGES ON pbxlogs.* TO 'asterisk'@'172.%' IDENTIFIED BY 'your_password';
  FLUSH PRIVILEGES;
For PostgreSQL:
- Update postgresql.conf:
  # In /etc/postgresql/*/main/postgresql.conf
  listen_addresses = '*' # or 'localhost,172.17.0.1'
- Update pg_hba.conf:
  # In /etc/postgresql/*/main/pg_hba.conf
  host asterisk asterisk 172.17.0.0/16 md5
- Restart PostgreSQL:
  sudo systemctl restart postgresql
After the database is configured, use one of these methods to connect the container:
Option 1 - Docker bridge gateway:
# In .env file:
DB_HOST=172.17.0.1 # Default Docker bridge gateway
# Find your gateway with: docker network inspect bridge | grep Gateway
Option 2 - Host networking:
# In docker run:
docker run --network host ...
# In .env file:
DB_HOST=localhost
Option 3 - Docker Desktop:
# In .env file:
DB_HOST=host.docker.internal
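Whichever method you pick, a quick reachability check from the container's point of view can save debugging time. A minimal sketch (hypothetical helper, reading the same DB_HOST/DB_PORT values you set in .env; run it inside the connector container or any Python container on the same network):

```python
# Hypothetical pre-flight check (not part of the connector): verify the
# database port is reachable from the Docker network before starting.
import os
import socket
import sys

host = os.getenv("DB_HOST", "172.17.0.1")
port = int(os.getenv("DB_PORT", "3306"))

try:
    with socket.create_connection((host, port), timeout=5):
        print(f"TCP connection to {host}:{port} succeeded")
except OSError as exc:
    print(f"Cannot reach {host}:{port}: {exc}")
    sys.exit(1)
```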
Step 1: Download configuration files
curl -O https://raw.githubusercontent.com/sipstack/sipstack-connector-asterisk/main/docker-compose.yml
curl -O https://raw.githubusercontent.com/sipstack/sipstack-connector-asterisk/main/.env.example
Step 2: Configure environment
cp .env.example .env
nano .env # Edit with your values
Edit .env with your settings:
# Required - API Configuration
API_KEY=sk_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx # Your SIPSTACK API key
REGION=us1 # API region (ca1, us1, us2, dev)
# Required - Database Connection
DB_TYPE=mysql # Database type: mysql or postgresql
DB_HOST=localhost # Database host (use localhost with --network host)
DB_PORT=3306 # Database port (3306 for MySQL, 5432 for PostgreSQL)
DB_USER=asterisk # Database user
DB_PASSWORD=your_db_password # Database password
DB_NAME=asterisk # Database name (e.g., pbxlogs)
DB_TABLE_CDR=cdr # CDR table name (default: cdr)
# Required - CEL Configuration
CEL_MODE=db # Options: db, csv, ami
DB_TABLE_CEL=cel # CEL table name (if CEL_MODE=db)
CEL_CSV_PATH=/var/log/asterisk/cel-custom/Master.csv # CSV path (if CEL_MODE=csv)
# Optional - Recording Configuration
DB_TABLE_RECORDINGS= # Database table for recordings (e.g., recordings)
RECORDING_PATHS=/var/spool/asterisk/monitor # Recording directories (comma-separated)
# Optional - Processing Configuration
CDR_POLL_INTERVAL=5 # Database poll interval in seconds
CDR_BATCH_SIZE=100 # Records per batch
CUSTOMER_ID=1 # Your SIPSTACK customer ID
TENANT= # Optional tenant identifier
HOST_HOSTNAME= # Optional hostname identifier
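Optionally, a small pre-flight check can confirm the required variables are set before deploying; a hypothetical sketch (the variable list mirrors the Required entries above):

```python
# Hypothetical sanity check (not shipped with the connector): verify the
# required .env settings are present before starting the container.
import os
import sys

REQUIRED = ["API_KEY", "DB_TYPE", "DB_HOST", "DB_USER", "DB_PASSWORD", "DB_NAME", "CEL_MODE"]

missing = [name for name in REQUIRED if not os.getenv(name)]
if missing:
    sys.exit(f"Missing required settings: {', '.join(missing)}")
print("All required settings present")
```

Run it with the .env values exported, for example `set -a; . ./.env; set +a; python3 check_env.py` (check_env.py being whatever you name the file).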
Step 3: Deploy
# Start the connector
docker-compose up -d
# View logs
docker-compose logs -f
# Check status
docker-compose ps
Option 1: Using .env file (Recommended)
Create your .env file from the example:
cp .env.example .env
nano .env # Edit with your values
Then run with environment file:
# Load .env and run container
source .env
docker run -d \
--name sipstack-connector \
--restart unless-stopped \
--network host \
--user ${PUID:-1000}:${PGID:-1000} \
--env-file .env \
-v /var/spool/asterisk:/var/spool/asterisk:ro \
-v /var/log/asterisk:/var/log/asterisk:ro \
-v sipstack-data:/data \
--log-driver json-file \
--log-opt max-size=10m \
--log-opt max-file=3 \
sipstack/connector-asterisk:latest
Option 2: Direct environment variables
docker run -d \
--name sipstack-connector \
--restart unless-stopped \
--network host \
-e API_KEY="sk_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" \
-e REGION="us1" \
-e DB_TYPE="mysql" \
-e DB_HOST="localhost" \
-e DB_PORT="3306" \
-e DB_USER="asterisk" \
-e DB_PASSWORD="your_db_password" \
-e DB_NAME="asterisk" \
-e DB_TABLE_CDR="cdr" \
-e CEL_MODE="db" \
-e DB_TABLE_CEL="cel" \
-e RECORDING_PATHS="/var/spool/asterisk/monitor" \
-v /var/spool/asterisk:/var/spool/asterisk:ro \
-v /var/log/asterisk:/var/log/asterisk:ro \
-v sipstack-data:/data \
--log-driver json-file \
--log-opt max-size=10m \
--log-opt max-file=3 \
sipstack/connector-asterisk:latest
Notes:
- The /var/log/asterisk volume mount is required if using CEL_MODE=csv
- Set PUID/PGID in your .env file to match your asterisk user (run: id asterisk)
# Docker Compose
docker-compose logs -f
# Docker Run
docker logs -f sipstack-connector
Features:
- 🚀 Database-Driven - Direct database polling, no AMI overhead
- 📊 Progressive Shipping - Ships calls in phases as they progress
- 🔄 Real-time Processing - Polls database every 5 seconds
- 🎯 Complete Call Data - Combines CDR and CEL for full context
- 📼 Recording Detection - Automatic recording detection via CEL events or database table
- 🔐 Secure API Access - Standard key authentication with region-based routing
- 📦 Batch Processing - Efficient batch uploads to reduce API calls
- 🔗 LinkedID Support - Complete call flow tracking
- 🌍 Multi-region Support - Choose from ca1, us1, us2, dev regions
- 🔁 Smart Retry Logic - Failed API calls retry with exponential backoff for up to 48 hours
- 📊 Prometheus Metrics - Built-in monitoring on port 8000
- 🔧 Zero Dependencies - No system packages needed on host
- ⚡ Fresh Start Mode - Uses database's last CDR timestamp to avoid processing old data
The connector uses a two-pronged approach for recording detection (see the sketch after this list):
Via CEL events:
- Monitors CEL events for the MixMonitor application (APP_START/APP_END)
- Extracts the recording filename from the appdata field
- Associates recordings with calls using linkedid
Via file system monitoring:
- Watches configured recording directories
- Detects new recordings that weren't caught by CEL
- Uses filename patterns to extract call metadata
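As an illustration of the CEL-based prong (not the connector's actual code), a MixMonitor APP_START event carries the recording target as the first argument in appdata:

```python
# Illustrative sketch: pull a recording filename out of a MixMonitor
# APP_START CEL row. Field names follow the cel table schema above; the
# appdata value is the MixMonitor argument string.
def recording_from_cel(row: dict):
    """Return the recording path for a MixMonitor APP_START event, else None."""
    if row.get("eventtype") != "APP_START" or row.get("appname", "").lower() != "mixmonitor":
        return None
    # The first comma-separated MixMonitor argument is the target filename.
    filename = row.get("appdata", "").split(",", 1)[0].strip()
    return filename or None

# Example row shaped like the cel table:
row = {
    "eventtype": "APP_START",
    "appname": "MixMonitor",
    "appdata": "/var/spool/asterisk/monitor/out-1001-20240101.wav,b",
    "linkedid": "1704067200.42",
}
print(recording_from_cel(row))  # -> /var/spool/asterisk/monitor/out-1001-20240101.wav
```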
| Variable | Required | Default | Description |
|---|---|---|---|
| API_KEY | Yes | - | SIPSTACK API key |
| REGION | No | us1 | API region (ca1, us1, us2) |
| LOG_LEVEL | No | INFO | Logging level (DEBUG, INFO, WARNING, ERROR) |
| Variable | Required | Default | Description |
|---|---|---|---|
| DB_TYPE | Yes | - | Database type: postgres or mysql |
| DB_HOST | Yes | - | Database hostname/IP |
| DB_PORT | No | 5432/3306 | Database port |
| DB_USER | Yes | - | Database username |
| DB_PASSWORD | Yes | - | Database password |
| DB_NAME | Yes | - | Database name |
| DB_TABLE_CDR | No | cdr | CDR table name |
| Variable | Required | Default | Description |
|---|---|---|---|
| CEL_MODE | Yes | - | CEL data source: db, csv, or ami |
| DB_TABLE_CEL | If mode=db | cel | CEL table name |
| CEL_CSV_PATH | If mode=csv | /var/log/asterisk/cel-custom/Master.csv | Path to CEL CSV file |
| CEL_CSV_POLL_INTERVAL | If mode=csv | 2 | Seconds between CSV checks |
| AMI_HOST | If mode=ami | - | Asterisk AMI hostname |
| AMI_PORT | If mode=ami | 5038 | AMI port |
| AMI_USERNAME | If mode=ami | - | AMI username |
| AMI_PASSWORD | If mode=ami | - | AMI password |
| Variable | Required | Default | Description |
|---|---|---|---|
| POLL_INTERVAL | No | 5 | Database poll interval (seconds) |
| BATCH_SIZE | No | 100 | Records per batch |
| SHIP_INCOMPLETE_AFTER | No | 30 | Ship incomplete calls after (seconds) |
| SHIP_COMPLETE_AFTER | No | 5 | Ship complete calls after (seconds) |
| MAX_RECORDS_PER_POLL | No | 1000 | Maximum records per poll cycle |
| Variable | Required | Default | Description |
|---|---|---|---|
| DB_TABLE_RECORDINGS | No | - | Database table for recordings (e.g., recordings) |
| RECORDING_PATHS | No | /var/spool/asterisk/monitor | Recording directories (comma-separated) |
| RECORDING_STABILITY_CHECK | No | 2 | Seconds to wait for file stability |
| RECORDING_BATCH_SIZE | No | 10 | Recordings per upload batch |
| RECORDING_FILE_EXTENSIONS | No | wav,mp3,gsm,ogg | File extensions to process |
| Variable | Required | Default | Description |
|---|---|---|---|
| ASTERISK_EXT_MIN_LENGTH | No | 2 | Minimum extension length |
| ASTERISK_EXT_MAX_LENGTH | No | 7 | Maximum extension length |
| ASTERISK_INTL_PREFIXES | No | 011,00,+ | International dialing prefixes |
| ASTERISK_E164_ENABLED | No | true | Enable E.164 format detection |
The connector maintains state using an internal SQLite database at /data/tracker.db:
- Tracks processed CDRs to avoid duplicates
- Stores call state for progressive shipping
- Records startup time for fresh start behavior
To persist state across container restarts:
volumes:
- sipstack-data:/data # Named volume (recommended)
# OR
- ./_data:/data # Local directory mount
When the connector starts with no existing state:
- Queries the database for the most recent CDR timestamp
- Uses this timestamp as the starting point for processing
- Only processes CDRs created after this point
- Ignores historical CDRs to avoid processing old data
This ensures clean deployments don't flood the system with historical calls and properly handles timezone differences between the connector and database.
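A minimal sketch of that watermark logic, assuming mysql-connector-python and the cdr schema from this guide (the connector's real implementation may differ):

```python
# Illustrative fresh-start watermark: on first run, take the newest CDR
# timestamp as the starting point and only process rows newer than it.
from datetime import datetime

import mysql.connector

conn = mysql.connector.connect(
    host="172.17.0.1", user="asterisk", password="your_db_password", database="asterisk"
)
cur = conn.cursor()

# Starting point: the most recent CDR already in the table (or "now" if empty).
cur.execute("SELECT MAX(calldate) FROM cdr")
watermark = cur.fetchone()[0] or datetime.now()

# Later poll cycles only pick up CDRs created after the watermark,
# so historical calls are never re-shipped.
cur.execute(
    "SELECT linkedid, calldate, src, dst, disposition FROM cdr "
    "WHERE calldate > %s ORDER BY calldate ASC LIMIT 100",
    (watermark,),
)
for row in cur.fetchall():
    print(row)
conn.close()
```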
Access metrics at http://localhost:8000/metrics:
- database_connection_status - Database connection state
- cdrs_processed_total - Total CDRs processed
- cels_processed_total - Total CEL events processed
- calls_shipped_total - Calls successfully shipped
- recordings_detected_total - Recordings detected
- api_request_duration_seconds - API response times
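For reference, this is roughly how such metrics are exported with the prometheus_client library; the connector already serves its own /metrics endpoint on port 8000, so the snippet below is purely illustrative:

```python
# Illustrative only: exporting metrics like those above with prometheus_client.
# This is not the connector's source code.
import random
import time

from prometheus_client import Counter, Gauge, Histogram, start_http_server

db_up = Gauge("database_connection_status", "Database connection state (1=up, 0=down)")
cdrs_processed = Counter("cdrs_processed_total", "Total CDRs processed")
api_latency = Histogram("api_request_duration_seconds", "API response times")

start_http_server(8000)  # serves /metrics on port 8000

while True:
    db_up.set(1)
    cdrs_processed.inc()
    with api_latency.time():
        time.sleep(random.uniform(0.05, 0.2))  # stand-in for an API call
    time.sleep(5)
```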
# Docker Compose
docker-compose logs -f sipstack-connector
# Docker Run
docker logs -f sipstack-connector
# Test from within container
docker exec -it sipstack-connector python -c "
from database_connector import DatabaseConnector
import os
config = {
'DB_TYPE': os.getenv('DB_TYPE'),
'DB_HOST': os.getenv('DB_HOST'),
'DB_PORT': os.getenv('DB_PORT'),
'DB_USER': os.getenv('DB_USER'),
'DB_PASSWORD': os.getenv('DB_PASSWORD'),
'DB_NAME': os.getenv('DB_NAME'),
}
db = DatabaseConnector(config)
print('Database connection successful!' if db.test_connection() else 'Connection failed')
"
# Test MySQL connection from host system
mysql -h 172.17.0.1 -u asterisk -p asterisk -e "SELECT COUNT(*) FROM cdr LIMIT 1;"
# Test from inside Docker container
docker run --rm -it mysql:8.0 mysql -h 172.17.0.1 -u asterisk -p asterisk
Cannot connect to database
- Verify database credentials and network connectivity
- Ensure CDR and CEL tables exist
- Check firewall rules if database is remote
MySQL/MariaDB "Packet sequence number wrong" Error
This error means the database is not accepting connections from Docker's IP. You need BOTH:
- MariaDB/MySQL listening on the Docker network interface
- User permissions for the Docker subnet
Quick Fix:
- First, check if MariaDB is listening on the right interface:
  sudo netstat -tlnp | grep 3306
  # If it shows 127.0.0.1:3306, it's ONLY listening on localhost
  # If it shows 0.0.0.0:3306, it's listening on all interfaces ✓
- If only on localhost, update bind-address (choose the most secure option):
  # Find config file:
  find /etc -name "*.cnf" | xargs grep -l bind-address 2>/dev/null
  # Edit the file (usually /etc/my.cnf or /etc/mysql/mariadb.conf.d/50-server.cnf):
  sudo nano /etc/mysql/mariadb.conf.d/50-server.cnf
  Option A: Multiple specific IPs (Most Secure - MariaDB 10.3+):
  # Listen only on localhost and Docker bridge
  bind-address = 127.0.0.1,172.17.0.1
  Option B: Bind to Docker bridge IP only:
  # If you don't need local connections
  bind-address = 172.17.0.1
  Option C: Use Unix socket for local, TCP for Docker:
  # Comment out bind-address entirely
  #bind-address = 127.0.0.1
  # MariaDB will listen on all IPs, but use firewall rules:
  Then add firewall rules:
  # Allow only Docker subnet and localhost
  sudo iptables -A INPUT -p tcp --dport 3306 -s 172.16.0.0/12 -j ACCEPT
  sudo iptables -A INPUT -p tcp --dport 3306 -s 127.0.0.1 -j ACCEPT
  sudo iptables -A INPUT -p tcp --dport 3306 -j DROP
  Option D: All interfaces (Least Secure):
  bind-address = 0.0.0.0
  # Restart MariaDB after changes:
  sudo systemctl restart mariadb
- Then check MySQL host access permissions:
  -- Connect to MySQL as root and check user permissions
  SELECT User, Host FROM mysql.user WHERE User = 'asterisk';
  -- Grant access from Docker subnet (common subnet: 172.17.0.0/16)
  GRANT ALL PRIVILEGES ON asterisk.* TO 'asterisk'@'172.17.%' IDENTIFIED BY 'your_password';
  GRANT ALL PRIVILEGES ON asterisk.* TO 'asterisk'@'172.%.%.%' IDENTIFIED BY 'your_password';
  FLUSH PRIVILEGES;
- Verify Docker network connectivity:
  # Test connection from within container
  docker exec -it sipstack-connector mysql -h 172.17.0.1 -u asterisk -p asterisk
  # Check Docker bridge network
  docker network ls
  docker network inspect bridge
- Check MySQL bind address:
  # In /etc/mysql/mysql.conf.d/mysqld.cnf
  bind-address = 0.0.0.0 # Allow connections from any IP
  # OR specific to Docker bridge
  bind-address = 172.17.0.1
- Restart MySQL after configuration changes:
  sudo systemctl restart mysql
- Use host networking as fallback:
  # Add --network host to docker run command
  docker run -d --name sipstack-connector --network host ...
- Alternative: Use localhost with port mapping:
  # In .env file
  DB_HOST=host.docker.internal # For Docker Desktop
  # OR
  DB_HOST=172.17.0.1 # For Linux Docker
No CDRs being processed
- Verify CDR and CEL are being written to database
- Check table names match configuration
- Review logs for specific errors
Recordings not detected
- Ensure CEL events include APP_START/APP_END for MixMonitor
- Verify recording paths are accessible
- Check file permissions on recording directories
High CPU usage
- Increase POLL_INTERVAL to reduce polling frequency
- Adjust BATCH_SIZE for optimal performance
- Monitor database query performance
If migrating from the AMI-based connector:
- Database Setup: Ensure Asterisk is writing CDR/CEL to database
- Stop AMI Connector: docker stop sipstack-connector
- Update Configuration: Switch from AMI settings to database settings
- Fresh Start: The new connector will only process new CDRs
- Deploy: Start the database connector
-- Add indexes for better performance
CREATE INDEX idx_cdr_calldate ON cdr(calldate);
CREATE INDEX idx_cdr_linkedid ON cdr(linkedid);
CREATE INDEX idx_cel_eventtime ON cel(eventtime);
CREATE INDEX idx_cel_linkedid ON cel(linkedid);
# For high-volume systems
POLL_INTERVAL=2 # More frequent polling
BATCH_SIZE=500 # Larger batches
MAX_RECORDS_PER_POLL=5000 # Process more per cycle
# For low-volume systems
POLL_INTERVAL=30 # Less frequent polling
BATCH_SIZE=50 # Smaller batches
MAX_RECORDS_PER_POLL=500 # Process fewer per cycle
Support:
- Issues: https://github.com/sipstack/sipstack-connector-asterisk/issues
- Documentation: https://docs.sipstack.com
- API Reference: https://api.sipstack.com/docs
MIT License - see LICENSE file for details