PDOdb is a lightweight, framework-agnostic PHP database library providing a unified API across MySQL, PostgreSQL, and SQLite.
Built on top of PDO with zero external dependencies, it offers:
- Fluent Query Builder - Intuitive, chainable API for all database operations
- Cross-Database Compatibility - Automatic SQL dialect handling (MySQL, PostgreSQL, SQLite)
- Query Caching - PSR-16 integration for result caching (10-1000x faster queries)
- Prepared Statement Pool - Automatic statement caching with LRU eviction (20-50% faster repeated queries)
- Read/Write Splitting - Horizontal scaling with master-replica architecture and load balancing
- Window Functions - Advanced analytics with ROW_NUMBER, RANK, LAG, LEAD, running totals, moving averages
- Common Table Expressions (CTEs) - WITH clauses for complex queries, recursive CTEs for hierarchical data
- Set Operations - UNION, INTERSECT, EXCEPT for combining query results with automatic deduplication
- DISTINCT & DISTINCT ON - Remove duplicates with full PostgreSQL DISTINCT ON support
- FILTER Clause - Conditional aggregates (SQL:2003 standard) with automatic MySQL fallback to CASE WHEN
- Full-Text Search - Cross-database FTS with unified API (MySQL FULLTEXT, PostgreSQL tsvector, SQLite FTS5)
- Schema Introspection - Query indexes, foreign keys, and constraints programmatically
- Advanced Pagination - Full, simple, and cursor-based pagination with metadata
- JSON Operations - Native JSON support with consistent API across all databases
- Bulk Operations - CSV/XML/JSON loaders, multi-row inserts, UPSERT support
- Export Helpers - Export results to JSON, CSV, and XML formats
- Transactions & Locking - Full transaction support with table locking
- Batch Processing - Memory-efficient generators for large datasets
- Exception Hierarchy - Typed exceptions for precise error handling
- Connection Retry - Automatic retry with exponential backoff
- PSR-14 Event Dispatcher - Event-driven architecture for monitoring, auditing, and middleware
- 80+ Helper Functions - SQL helpers for strings, dates, math, JSON, aggregations, and more
- Fully Tested - 596 tests, 2687 assertions across all dialects
- Type-Safe - PHPStan level 8 validated, PSR-12 compliant
Inspired by ThingEngineer/PHP-MySQLi-Database-Class
- Requirements
- Installation
- 📖 Documentation
- 📚 Examples
- Quick Example
- Configuration
- Quick Start
- Query Analysis
- JSON Operations
- Advanced Usage
- Error Handling
- Performance Tips
- Debugging
- Helper Functions Reference
- Public API Reference
- Dialect Differences
- Troubleshooting
- Testing
- Database Error Codes
- Contributing
- License
- Acknowledgments
- PHP: 8.4 or higher
- PDO Extensions:
- pdo_mysql for MySQL/MariaDB
- pdo_pgsql for PostgreSQL
- pdo_sqlite for SQLite
 
- Supported Databases:
- MySQL 5.7+ / MariaDB 10.3+
- PostgreSQL 9.4+
- SQLite 3.38+
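To confirm which PDO drivers are compiled into your PHP build, you can list them with the core PDO class (plain PDO, not PDOdb):
php -r 'print_r(PDO::getAvailableDrivers());'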
 
Check if your SQLite has JSON support:
sqlite3 :memory: "SELECT json_valid('{}')"
Install via Composer:
composer require tommyknocker/pdo-database-class
For specific versions:
# Latest 2.x version
composer require tommyknocker/pdo-database-class:^2.0
# Latest 1.x version
composer require tommyknocker/pdo-database-class:^1.0
# Development version
composer require tommyknocker/pdo-database-class:dev-master
Complete documentation is available in the documentation/ directory with 56+ detailed guides covering all features:
- Getting Started - Installation, configuration, your first connection
- Core Concepts - Connection management, query builder, parameter binding, dialects
- Query Builder - SELECT, DML, filtering, joins, aggregations, subqueries
- JSON Operations - Working with JSON across all databases
- Advanced Features - Transactions, batch processing, bulk operations, UPSERT
- Error Handling - Exception hierarchy, retry logic, logging, monitoring
- Helper Functions - Complete reference for all helper functions
- Best Practices - Security, performance, memory management, code organization
- API Reference - Complete API documentation
- Cookbook - Common patterns, real-world examples, troubleshooting
Each guide includes working code examples, dialect-specific notes, security considerations, and best practices.
Start here: Documentation Index
Comprehensive, runnable examples are available in the examples/ directory:
- Basic - Connection, CRUD, WHERE conditions
- Intermediate - JOINs, aggregations, pagination, transactions
- Advanced - Connection pooling, bulk operations, UPSERT
- JSON Operations - Complete guide to JSON features
- Helper Functions - String, math, date/time helpers
- Export Helpers - Export data to JSON, CSV, XML
- Real-World - Blog system, user auth, search, multi-tenant
- README Examples - Examples extracted from this README
- Connection Retry - Retry mechanism with logging
- Exception Handling - Comprehensive error handling
Each example is self-contained with setup instructions. See examples/README.md for the full catalog.
Quick start:
cd examples
# SQLite (ready to use, no setup required)
php 01-basic/02-simple-crud.php
# MySQL (update config.mysql.php with your credentials)
PDODB_DRIVER=mysql php 01-basic/02-simple-crud.php
# PostgreSQL (update config.pgsql.php with your credentials)
PDODB_DRIVER=pgsql php 01-basic/02-simple-crud.php
# Test all examples on all available databases
./scripts/test-examples.sh
The PDODB_DRIVER environment variable controls which database is used:
- sqlite (default) - uses config.sqlite.php
- mysql - uses config.mysql.php
- pgsql - uses config.pgsql.php
If the config file is missing, the examples fall back to SQLite.
use tommyknocker\pdodb\PdoDb;
use tommyknocker\pdodb\helpers\Db;
// Initialize
$db = new PdoDb('mysql', [
    'host' => 'localhost',
    'username' => 'user',
    'password' => 'pass',
    'dbname' => 'testdb'
]);
// Simple query
$users = $db->find()
    ->from('users')
    ->where('age', 18, '>')
    ->andWhere(Db::jsonContains('tags', 'php'))
    ->orderBy('created_at', 'DESC')
    ->limit(10)
    ->get();
// Insert with JSON
$id = $db->find()->table('users')->insert([
    'name' => 'John',
    'age' => 25,
    'meta' => Db::jsonObject(['city' => 'NYC', 'active' => true])
]);
use tommyknocker\pdodb\PdoDb;
$db = new PdoDb('mysql', [
    'pdo'         => null,                 // Optional. Existing PDO object. If specified, all other parameters (except prefix) are ignored.
    'host'        => '127.0.0.1',          // Required. MySQL host (e.g. 'localhost' or IP address).
    'username'    => 'testuser',           // Required. MySQL username.
    'password'    => 'testpass',           // Required. MySQL password.
    'dbname'      => 'testdb',             // Required. Database name.
    'port'        => 3306,                 // Optional. MySQL port (default is 3306).
    'prefix'      => 'my_',                // Optional. Table prefix (e.g. 'wp_').
    'charset'     => 'utf8mb4',            // Optional. Connection charset (recommended: 'utf8mb4').
    'unix_socket' => '/var/run/mysqld/mysqld.sock', // Optional. Path to Unix socket if used.
    'sslca'       => '/path/ca.pem',       // Optional. Path to SSL CA certificate.
    'sslcert'     => '/path/client-cert.pem', // Optional. Path to SSL client certificate.
    'sslkey'      => '/path/client-key.pem',  // Optional. Path to SSL client key.
    'compress'    => true                  // Optional. Enable protocol compression.
]);
use tommyknocker\pdodb\PdoDb;
$db = new PdoDb('pgsql', [
    'pdo'              => null,            // Optional. Existing PDO object. If specified, all other parameters (except prefix) are ignored.
    'host'             => '127.0.0.1',     // Required. PostgreSQL host.
    'username'         => 'testuser',      // Required. PostgreSQL username.
    'password'         => 'testpass',      // Required. PostgreSQL password.
    'dbname'           => 'testdb',        // Required. Database name.
    'port'             => 5432,            // Optional. PostgreSQL port (default is 5432).
    'prefix'           => 'pg_',           // Optional. Table prefix.
    'options'          => '--client_encoding=UTF8', // Optional. Extra options (e.g. client encoding).
    'sslmode'          => 'require',       // Optional. SSL mode: disable, allow, prefer, require, verify-ca, verify-full.
    'sslkey'           => '/path/client.key',   // Optional. Path to SSL private key.
    'sslcert'          => '/path/client.crt',   // Optional. Path to SSL client certificate.
    'sslrootcert'      => '/path/ca.crt',       // Optional. Path to SSL root certificate.
    'application_name' => 'MyApp',         // Optional. Application name (visible in pg_stat_activity).
    'connect_timeout'  => 5,               // Optional. Connection timeout in seconds.
    'hostaddr'         => '192.168.1.10',  // Optional. Direct IP address (bypasses DNS).
    'service'          => 'myservice',     // Optional. Service name from pg_service.conf.
    'target_session_attrs' => 'read-write' // Optional. For clusters: any, read-write.
]);
use tommyknocker\pdodb\PdoDb;
$db = new PdoDb('sqlite', [
    'pdo'   => null,                       // Optional. Existing PDO object. If specified, all other parameters (except prefix) are ignored.
    'path'  => '/path/to/database.sqlite', // Required. Path to SQLite database file.
                                           // Use ':memory:' for an in-memory database.
    'prefix'=> 'sq_',                      // Optional. Table prefix.
    'mode'  => 'rwc',                      // Optional. Open mode: ro (read-only), rw (read/write), rwc (create if not exists), memory.
    'cache' => 'shared'                    // Optional. Cache mode: shared or private.
]);
Manage multiple database connections and switch between them:
use tommyknocker\pdodb\PdoDb;
// Initialize without a default connection
$db = new PdoDb();
// Add multiple connections
$db->addConnection('mysql_main', [
    'driver' => 'mysql',
    'host' => 'mysql.server.com',
    'username' => 'user',
    'password' => 'pass',
    'dbname' => 'main_db'
]);
$db->addConnection('pgsql_analytics', [
    'driver' => 'pgsql',
    'host' => 'postgres.server.com',
    'username' => 'analyst',
    'password' => 'pass',
    'dbname' => 'analytics'
]);
// Switch between connections
$users = $db->connection('mysql_main')->find()->from('users')->get();
$stats = $db->connection('pgsql_analytics')->find()->from('stats')->get();
Scale horizontally with a master-replica architecture. Reads are automatically routed to replicas and writes to the master:
use tommyknocker\pdodb\PdoDb;
use tommyknocker\pdodb\connection\loadbalancer\RoundRobinLoadBalancer;
$db = new PdoDb();
// Enable read/write splitting with load balancer
$db->enableReadWriteSplitting(new RoundRobinLoadBalancer());
// Add write connection (master)
$db->addConnection('master', [
    'driver' => 'mysql',
    'host' => 'master.db.local',
    'username' => 'user',
    'password' => 'pass',
    'dbname' => 'myapp',
    'type' => 'write',
]);
// Add read connections (replicas)
$db->addConnection('replica-1', [
    'driver' => 'mysql',
    'host' => 'replica1.db.local',
    'username' => 'user',
    'password' => 'pass',
    'dbname' => 'myapp',
    'type' => 'read',
]);
$db->addConnection('replica-2', [
    'driver' => 'mysql',
    'host' => 'replica2.db.local',
    'username' => 'user',
    'password' => 'pass',
    'dbname' => 'myapp',
    'type' => 'read',
]);
$db->connection('master');
// SELECT queries automatically go to read replicas
$users = $db->find()->from('users')->get();
// INSERT/UPDATE/DELETE automatically go to write master
$id = $db->find()->table('users')->insert(['name' => 'John', 'email' => 'john@example.com']);
// Force a SELECT to read from master
$user = $db->find()->from('users')->forceWrite()->where('id', $id)->getOne();
// Enable sticky writes (reads go to master for 60s after writes)
$db->enableStickyWrites(60);
Load Balancing Strategies:
- RoundRobinLoadBalancer - Distributes requests evenly in circular order
- RandomLoadBalancer - Randomly selects a replica
- WeightedLoadBalancer - Distributes proportionally based on weights
Key Features:
- Automatic query routing (SELECTs → replicas, DML → master)
- Sticky writes for read-after-write consistency
- Multiple load balancing strategies
- Health checks and automatic failover
- Transaction support (always uses master)
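For example, because transactions always run on the master (per the routing rules above), a read issued inside a transaction is never served by a replica. A minimal sketch using the connections configured above:
$db->startTransaction();
try {
    // While the transaction is open, both the write and the read use the master
    $id = $db->find()->table('users')->insert(['name' => 'Jane']);
    $fresh = $db->find()->from('users')->where('id', $id)->getOne();
    $db->commit();
} catch (\Throwable $e) {
    $db->rollBack();
    throw $e;
}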
See:
- Documentation: Read/Write Splitting
- Example: Basic Setup
- Example: Sticky Writes
- Example: Load Balancers
Perform advanced analytics with window functions (MySQL 8.0+, PostgreSQL 9.4+, SQLite 3.25+):
use tommyknocker\pdodb\helpers\Db;
// ROW_NUMBER - Sequential numbering within partitions
$results = $db->find()
    ->from('sales')
    ->select([
        'product',
        'region',
        'amount',
        'row_num' => Db::rowNumber()
            ->partitionBy('region')
            ->orderBy('amount', 'DESC')
    ])
    ->get();
// RANK - Ranking with gaps for ties
$results = $db->find()
    ->from('students')
    ->select([
        'name',
        'score',
        'student_rank' => Db::rank()->orderBy('score', 'DESC')
    ])
    ->get();
// LAG/LEAD - Access previous/next row data
$results = $db->find()
    ->from('monthly_sales')
    ->select([
        'month',
        'revenue',
        'prev_month' => Db::lag('revenue', 1, 0)->orderBy('month'),
        'next_month' => Db::lead('revenue', 1, 0)->orderBy('month')
    ])
    ->get();
// Running totals
$results = $db->find()
    ->from('transactions')
    ->select([
        'date',
        'amount',
        'running_total' => Db::windowAggregate('SUM', 'amount')
            ->orderBy('date')
            ->rows('ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW')
    ])
    ->get();
// Moving averages (7-day)
$results = $db->find()
    ->from('metrics')
    ->select([
        'date',
        'value',
        'moving_avg_7' => Db::windowAggregate('AVG', 'value')
            ->orderBy('date')
            ->rows('ROWS BETWEEN 6 PRECEDING AND CURRENT ROW')
    ])
    ->get();
// NTILE - Divide into quartiles
$results = $db->find()
    ->from('products')
    ->select([
        'name',
        'price',
        'quartile' => Db::ntile(4)->orderBy('price')
    ])
    ->get();
Available Functions:
- Db::rowNumber() - Sequential numbering
- Db::rank() - Ranking with gaps
- Db::denseRank() - Ranking without gaps
- Db::ntile(n) - Divide into n buckets
- Db::lag() - Access previous row
- Db::lead() - Access next row
- Db::firstValue() - First value in window
- Db::lastValue() - Last value in window
- Db::nthValue(n) - Nth value in window
- Db::windowAggregate(func, col) - Aggregate functions (SUM, AVG, MIN, MAX, COUNT)
Common Use Cases:
- Rankings and leaderboards
- Running totals and balances
- Moving averages and smoothing
- Period-over-period comparisons (MoM, YoY)
- Percentile and quartile analysis
- Gap detection and trend analysis
Define temporary named result sets using WITH clauses for better query organization and support for hierarchical data.
use tommyknocker\pdodb\helpers\Db;
// Basic CTE with closure
$products = $pdoDb->find()
    ->with('expensive_products', function ($q) {
        $q->from('products')->where('price', 1000, '>');
    })
    ->from('expensive_products')
    ->orderBy('price', 'DESC')
    ->get();
// Multiple CTEs
$analysis = $pdoDb->find()
    ->with('electronics', function ($q) {
        $q->from('products')->where('category', 'Electronics');
    })
    ->with('furniture', function ($q) {
        $q->from('products')->where('category', 'Furniture');
    })
    ->with('combined', Db::raw('
        SELECT * FROM electronics
        UNION ALL
        SELECT * FROM furniture
    '))
    ->from('combined')
    ->orderBy('price')
    ->get();
// Recursive CTE - hierarchical data
$hierarchy = $pdoDb->find()
    ->withRecursive('category_tree', Db::raw('
        SELECT id, name, parent_id, 0 as level
        FROM categories
        WHERE parent_id IS NULL
        UNION ALL
        SELECT c.id, c.name, c.parent_id, ct.level + 1
        FROM categories c
        INNER JOIN category_tree ct ON c.parent_id = ct.id
    '), ['id', 'name', 'parent_id', 'level'])
    ->from('category_tree')
    ->orderBy('level')
    ->get();
// CTE with QueryBuilder instance
$subQuery = $pdoDb->find()
    ->from('orders')
    ->select(['customer_id', 'total' => Db::sum('amount')])
    ->groupBy('customer_id');
$results = $pdoDb->find()
    ->with('customer_totals', $subQuery)
    ->from('customers')
    ->join('customer_totals', 'customers.id = customer_totals.customer_id')
    ->select(['customers.name', 'customer_totals.total'])
    ->get();
// CTE with column list
$results = $pdoDb->find()
    ->with('product_summary', function ($q) {
        $q->from('products')->select(['name', 'price']);
    }, ['product_name', 'product_price'])
    ->from('product_summary')
    ->where('product_price', 100, '>')
    ->get();
Key Features:
- Basic CTEs - Temporary named result sets for query organization
- Recursive CTEs - Process hierarchical or tree-structured data
- Multiple CTEs - Chain and combine multiple WITH clauses
- Flexible Definition - Use closures, QueryBuilder instances, or raw SQL
- Column Lists - Explicit column naming for cleaner queries
- Cross-Database - Works seamlessly on MySQL 8.0+, PostgreSQL 8.4+, SQLite 3.8.3+
Common Use Cases:
- Simplifying complex queries into logical components
- Organizational charts and reporting hierarchies
- Category trees and nested menus
- Bill of materials and part assemblies
- Graph traversal and pathfinding
- Recursive data aggregation
Note: All query examples start with $db->find() which returns a QueryBuilder instance. You can chain multiple methods before executing the query with get(), getOne(), insert(), update(), or delete().
// Get one row
$user = $db->find()
    ->from('users')
    ->select(['id', 'name', 'email'])
    ->where('id', 10)
    ->getOne();
// Get multiple rows
$users = $db->find()
    ->from('users')
    ->select(['id', 'name'])
    ->where('age', 18, '>=')
    ->get();
use tommyknocker\pdodb\helpers\Db;
// Single row
$id = $db->find()->table('users')->insert([
    'name' => 'Alice',
    'age'  => 30,
    'created_at' => Db::now()
]);
// Multiple rows
$rows = [
    ['name' => 'Bob', 'age' => 25],
    ['name' => 'Carol', 'age' => 28],
];
$count = $db->find()->table('users')->insertMulti($rows);
use tommyknocker\pdodb\helpers\Db;
$affected = $db->find()
    ->table('users')
    ->where('id', 5)
    ->update([
        'age' => Db::inc(),  // Increment by 1
        'updated_at' => Db::now()
    ]);
$affected = $db->find()
    ->table('users')
    ->where('age', 18, '<')
    ->delete();
use tommyknocker\pdodb\helpers\Db;
$users = $db->find()
    ->from('users')
    ->where('status', 'active')
    ->andWhere('age', 18, '>')
    ->andWhere(Db::like('email', '%@example.com'))
    ->get();
use tommyknocker\pdodb\helpers\Db;
$stats = $db->find()
    ->from('users AS u')
    ->select(['u.id', 'u.name', 'total' => Db::sum('o.amount')])
    ->leftJoin('orders AS o', 'o.user_id = u.id')
    ->groupBy('u.id')
    ->having(Db::sum('o.amount'), 1000, '>')
    ->orderBy('total', 'DESC')
    ->limit(20)
    ->get();
use tommyknocker\pdodb\helpers\Db;
// Portable across MySQL, PostgreSQL, SQLite
$db->find()->table('users')->onDuplicate([
    'age' => Db::inc(),
    'updated_at' => Db::now()
])->insert([
    'email' => 'alice@example.com',  // Unique key
    'name' => 'Alice',
    'age' => 30
]);
// Simple load
$db->find()->table('users')->loadCsv('/path/to/file.csv');
// With options
$db->find()->table('users')->loadCsv('/path/to/file.csv', [
    'fieldChar' => ',',
    'fieldEnclosure' => '"',
    'fields' => ['id', 'name', 'email', 'age'],
    'local' => true,
    'lineChar' => "\n",
    'linesToIgnore' => 1  // Skip header row
]);
$db->find()->table('users')->loadXml('/path/to/file.xml', [
    'rowTag' => '<user>',
    'linesToIgnore' => 0
]);
// Array format
$db->find()->table('products')->loadJson('/path/to/products.json');
// NDJSON format (newline-delimited)
$db->find()->table('products')->loadJson('/path/to/products.ndjson', [
    'format' => 'lines',
]);
// With options
$db->find()->table('products')->loadJson('/path/to/products.json', [
    'columns' => ['name', 'price', 'stock'],
    'batchSize' => 1000,
]);
$db->startTransaction();
try {
    $userId = $db->find()->table('users')->insert(['name' => 'Alice']);
    $db->find()->table('orders')->insert(['user_id' => $userId, 'total' => 100]);
    $db->commit();
} catch (\Throwable $e) {
    $db->rollBack();
    throw $e;
}
$db->lock(['users', 'orders'])->setLockMethod('WRITE');
try {
    // Perform exclusive operations
    $db->find()->table('users')->where('id', 1)->update(['balance' => 100]);
} finally {
    $db->unlock();
}
PDOdb provides methods to analyze query execution plans and table structures across all supported databases.
// Analyze query execution plan
$plan = $db->find()
    ->table('users')
    ->where('age', 25, '>')
    ->orderBy('created_at', 'DESC')
    ->explain();
// Returns dialect-specific execution plan
// MySQL: id, select_type, table, type, possible_keys, key, key_len, ref, rows, Extra
// PostgreSQL: QUERY PLAN column with execution details
// SQLite: addr, opcode, p1, p2, p3, p4, p5, comment
// Get detailed execution statistics
$analysis = $db->find()
    ->table('users')
    ->join('orders', 'users.id = orders.user_id')
    ->where('users.age', 25, '>')
    ->explainAnalyze();
// Returns:
// - PostgreSQL: EXPLAIN ANALYZE with actual execution times
// - MySQL: EXPLAIN FORMAT=JSON with detailed cost analysis
// - SQLite: EXPLAIN QUERY PLAN with query optimization details
// Get table structure information
$structure = $db->find()
    ->table('users')
    ->describe();
// Returns dialect-specific column information:
// MySQL: Field, Type, Null, Key, Default, Extra
// PostgreSQL: column_name, data_type, is_nullable, column_default
// SQLite: cid, name, type, notnull, dflt_value, pk
// Get SQL query and parameters without execution
$query = $db->find()
    ->table('users')
    ->where('age', 25, '>')
    ->andWhere('status', 'active')
    ->toSQL();
echo $query['sql'];    // "SELECT * FROM users WHERE age > :age AND status = :status"
print_r($query['params']); // ['age' => 25, 'status' => 'active']
// Analyze a complex query
$complexQuery = $db->find()
    ->table('users')
    ->join('orders', 'users.id = orders.user_id')
    ->join('products', 'orders.product_id = products.id')
    ->where('users.created_at', '2023-01-01', '>')
    ->andWhere('orders.status', 'completed')
    ->groupBy('users.id')
    ->having('COUNT(orders.id)', 5, '>')
    ->orderBy('users.created_at', 'DESC');
// Get execution plan
$plan = $complexQuery->explain();
// Get detailed analysis
$analysis = $complexQuery->explainAnalyze();
// Check table structures
$usersStructure = $db->find()->table('users')->describe();
$ordersStructure = $db->find()->table('orders')->describe();
PDOdb provides a unified JSON API that works consistently across MySQL, PostgreSQL, and SQLite.
use tommyknocker\pdodb\helpers\Db;
$db->find()->table('users')->insert([
    'name' => 'John',
    'meta' => Db::jsonObject(['city' => 'NYC', 'age' => 30, 'verified' => true]),
    'tags' => Db::jsonArray('php', 'mysql', 'docker')
]);
use tommyknocker\pdodb\helpers\Db;
// Find users older than 25
$adults = $db->find()
    ->from('users')
    ->where(Db::jsonPath('meta', ['age'], '>', 25))
    ->get();
// Multiple JSON conditions
$active = $db->find()
    ->from('users')
    ->where(Db::jsonPath('meta', ['age'], '>', 25))
    ->andWhere(Db::jsonContains('tags', 'php'))
    ->andWhere(Db::jsonExists('meta', ['verified']))
    ->get();
use tommyknocker\pdodb\helpers\Db;
// Single value
$phpDevs = $db->find()
    ->from('users')
    ->where(Db::jsonContains('tags', 'php'))
    ->get();
// Multiple values (subset matching)
$fullStack = $db->find()
    ->from('users')
    ->where(Db::jsonContains('tags', ['php', 'mysql']))  // Must have both
    ->get();
use tommyknocker\pdodb\helpers\Db;
$withCity = $db->find()
    ->from('users')
    ->where(Db::jsonExists('meta', ['city']))
    ->get();
use tommyknocker\pdodb\helpers\Db;
$users = $db->find()
    ->from('users')
    ->select([
        'id',
        'name',
        'city' => Db::jsonGet('meta', ['city']),
        'age' => Db::jsonGet('meta', ['age'])
    ])
    ->get();
use tommyknocker\pdodb\helpers\Db;
$sorted = $db->find()
    ->from('users')
    ->orderBy(Db::jsonGet('meta', ['age']), 'DESC')
    ->get();
use tommyknocker\pdodb\helpers\Db;
$users = $db->find()
    ->from('users')
    ->select([
        'id',
        'tag_count' => Db::jsonLength('tags')
    ])
    ->where(Db::jsonLength('tags'), 3, '>')
    ->get();
use tommyknocker\pdodb\helpers\Db;
$users = $db->find()
    ->from('users')
    ->select([
        'id',
        'tags_type' => Db::jsonType('tags')  // 'array', 'object', 'string', etc.
    ])
    ->get();
use tommyknocker\pdodb\helpers\Db;
// Update JSON field using QueryBuilder method
$db->find()
    ->table('users')
    ->where('id', 1)
    ->update([
        'meta' => $db->find()->jsonSet('meta', ['city'], 'London')
    ]);
// Remove JSON field
$db->find()
    ->table('users')
    ->where('id', 1)
    ->update([
        'meta' => $db->find()->jsonRemove('meta', ['old_field'])
    ]);
use tommyknocker\pdodb\helpers\Db;
// Raw SELECT
$users = $db->rawQuery(
    'SELECT * FROM users WHERE age > :age AND city = :city',
    ['age' => 18, 'city' => 'NYC']
);
// Single row
$user = $db->rawQueryOne(
    'SELECT * FROM users WHERE id = :id',
    ['id' => 10]
);
// Single value
$count = $db->rawQueryValue(
    'SELECT COUNT(*) FROM users WHERE status = :status',
    ['status' => 'active']
);
use tommyknocker\pdodb\helpers\Db;
// Using helper functions where possible
$db->find()
    ->table('users')
    ->where('id', 5)
    ->update([
        'age' => Db::raw('age + :inc', ['inc' => 5]), // No helper for arithmetic
        'name' => Db::concat('name', '_updated')      // Using CONCAT helper
    ]);
use tommyknocker\pdodb\helpers\Db;
// Nested OR conditions
$users = $db->find()
    ->from('users')
    ->where('status', 'active')
    ->andWhere(function($qb) {
        $qb->where('age', 18, '>')
           ->orWhere('verified', 1);
    })
    ->get();
// IN condition
$users = $db->find()
    ->from('users')
    ->where(Db::in('id', [1, 2, 3, 4, 5]))
    ->get();
// BETWEEN
$users = $db->find()
    ->from('users')
    ->where(Db::between('age', 18, 65))
    ->get();
// IS NULL / IS NOT NULL
$users = $db->find()
    ->from('users')
    ->where(Db::isNull('deleted_at'))
    ->andWhere(Db::isNotNull('email'))
    ->get();
// Method 1: Using callable functions
$users = $db->find()
    ->from('users')
    ->whereIn('id', function($query) {
        $query->from('orders')
            ->select('user_id')
            ->where('total', 1000, '>');
    })
    ->get();
// Method 2: Using QueryBuilder instance directly
$orderSubquery = $db->find()
    ->from('orders')
    ->select('user_id')
    ->where('total', 1000, '>');
$users = $db->find()
    ->from('users')
    ->where('id', $orderSubquery, 'IN')
    ->get();
// Method 3: WHERE EXISTS with callable (automatic external reference detection)
$users = $db->find()
    ->from('users')
    ->whereExists(function($query) {
        $query->from('orders')
            ->where('user_id', 'users.id')  // Automatically detected as external reference
            ->where('status', 'completed');
    })
    ->get();
// Method 4: WHERE EXISTS with QueryBuilder instance (automatic external reference detection)
$orderExistsQuery = $db->find()
    ->from('orders')
    ->where('user_id', 'users.id')  // Automatically detected as external reference
    ->where('status', 'completed');
$users = $db->find()
    ->from('users')
    ->whereExists($orderExistsQuery)
    ->get();
// Method 5: Complex subquery in SELECT (using helper functions)
$users = $db->find()
    ->from('users AS u')
    ->select([
        'u.id',
        'u.name',
        'order_count' => Db::raw('(SELECT ' . Db::count()->getValue() . ' FROM orders o WHERE o.user_id = u.id)')
    ])
    ->get();
// Method 6: Multiple subqueries with different approaches
$highValueOrders = $db->find()
    ->from('orders')
    ->select('user_id')
    ->where('total', 1000, '>');
$bannedUsers = $db->find()
    ->from('bans')
    ->select('user_id')
    ->where('active', 1);
$users = $db->find()
    ->from('users')
    ->where('id', $highValueOrders, 'IN')
    ->whereNotExists(function($query) use ($bannedUsers) {
        $query->from('bans')
            ->where('user_id', Db::raw('users.id'))
            ->where('active', 1);
    })
    ->get();
The library automatically detects external table references in subqueries and converts them to RawValue objects. This means you can write natural SQL without manually wrapping external references:
// ✅ Automatic detection - no need for Db::raw()
$users = $db->find()
    ->from('users')
    ->whereExists(function($query) {
        $query->from('orders')
            ->where('user_id', 'users.id')        // Automatically detected
            ->where('created_at', 'users.created_at', '>')  // Automatically detected
            ->where('status', 'completed');
    })
    ->get();
// ✅ Works in SELECT expressions
$users = $db->find()
    ->from('users')
    ->select([
        'id',
        'name',
        'total_orders' => 'COUNT(orders.id)',     // Automatically detected
        'last_order' => 'MAX(orders.created_at)' // Automatically detected
    ])
    ->leftJoin('orders', 'orders.user_id = users.id')
    ->groupBy('users.id', 'users.name')
    ->get();
// ✅ Works in ORDER BY
$users = $db->find()
    ->from('users')
    ->select(['users.id', 'users.name', 'total' => 'SUM(orders.amount)'])
    ->leftJoin('orders', 'orders.user_id = users.id')
    ->groupBy('users.id', 'users.name')
    ->orderBy('total', 'DESC')  // Automatically detected
    ->get();
// ✅ Works in GROUP BY
$results = $db->find()
    ->from('orders')
    ->select(['user_id', 'total' => 'SUM(amount)'])
    ->groupBy('user_id')  // Internal reference - not converted
    ->get();
// ✅ Works with table aliases
$users = $db->find()
    ->from('users AS u')
    ->whereExists(function($query) {
        $query->from('orders AS o')
            ->where('o.user_id', 'u.id')  // Automatically detected with aliases
            ->where('o.status', 'completed');
    })
    ->get();
Detection Rules:
- Pattern: table.column or alias.column
- Only converts if the table/alias is not in the current query's FROM clause
- Works in: where(), select(), orderBy(), groupBy(), having()
- Internal references (tables in current query) are not converted
- Invalid patterns (like 123.invalid) are not converted
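If a reference does not match the detection pattern, or you simply prefer to be explicit, you can wrap it manually with Db::ref() (listed in the helper reference below as an equivalent of Db::raw()):
// Manual external reference - explicit alternative to automatic detection
$users = $db->find()
    ->from('users')
    ->whereExists(function($query) {
        $query->from('orders')
            ->where('user_id', Db::ref('users.id'))
            ->where('status', 'completed');
    })
    ->get();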
// Specify schema explicitly
$users = $db->find()->from('public.users')->get();
$archived = $db->find()->from('archive.old_users')->get();
// Cross-schema JOIN
$data = $db->find()
    ->from('public.users AS u')
    ->leftJoin('archive.orders AS o', 'o.user_id = u.id')
    ->get();
PDOdb supports multiple convenient ways to order results:
// Single column
$users = $db->find()->from('users')->orderBy('name', 'ASC')->get();
// Multiple columns (chained)
$users = $db->find()
    ->from('users')
    ->orderBy('status', 'ASC')
    ->orderBy('created_at', 'DESC')
    ->get();
// Array with explicit directions
$users = $db->find()
    ->from('users')
    ->orderBy(['status' => 'ASC', 'created_at' => 'DESC'])
    ->get();
// Array with default direction
$users = $db->find()
    ->from('users')
    ->orderBy(['status', 'name'], 'DESC')
    ->get();
// Comma-separated string
$users = $db->find()
    ->from('users')
    ->orderBy('status ASC, created_at DESC, name ASC')
    ->get();See Ordering & Pagination Documentation for more examples.
PDOdb offers three pagination styles for different use cases:
// Traditional page-number pagination
$result = $db->find()
    ->from('posts')
    ->orderBy('created_at', 'DESC')
    ->paginate(20, 1); // 20 per page, page 1
echo "Page {$result->currentPage()} of {$result->lastPage()}\n";
echo "Total: {$result->total()} items\n";
echo "Showing: {$result->from()}-{$result->to()}\n";
// JSON API response
header('Content-Type: application/json');
echo json_encode($result);
// Infinite scroll / "Load More" pattern
$result = $db->find()
    ->from('posts')
    ->orderBy('created_at', 'DESC')
    ->simplePaginate(20, 1);
if ($result->hasMorePages()) {
    echo '<button data-page="' . ($result->currentPage() + 1) . '">Load More</button>';
}
// JSON response (no COUNT query)
echo json_encode($result);
// Stable pagination for millions of rows
$result = $db->find()
    ->from('posts')
    ->orderBy('id', 'DESC')
    ->cursorPaginate(20); // First page
// Next page using cursor
if ($result->hasMorePages()) {
    $result2 = $db->find()
        ->from('posts')
        ->orderBy('id', 'DESC')
        ->cursorPaginate(20, $result->nextCursor());
}
// JSON response with encoded cursors
echo json_encode($result);
$result = $db->find()
    ->from('posts')
    ->where('status', 'published')
    ->paginate(20, 2, [
        'path' => '/api/posts',
        'query' => ['status' => 'published']
    ]);
echo $result->nextPageUrl();
// Output: /api/posts?status=published&page=3
| Type | Queries | Performance (1M rows) | Use Case | 
|---|---|---|---|
| Full | 2 (COUNT + SELECT) | ~200ms | Page numbers needed | 
| Simple | 1 (SELECT +1) | ~50ms | Infinite scroll | 
| Cursor | 1 (SELECT WHERE) | ~30ms | Large datasets, real-time | 
See Pagination Documentation for more details.
For processing large datasets efficiently, PDOdb provides three generator-based methods:
// Process data in chunks of 100 records
foreach ($db->find()->from('users')->orderBy('id')->batch(100) as $batch) {
    echo "Processing batch of " . count($batch) . " users\n";
    
    foreach ($batch as $user) {
        // Process each user in the batch
        processUser($user);
    }
}
// Process records individually with internal buffering
foreach ($db->find()
    ->from('users')
    ->where('active', 1)
    ->orderBy('id')
    ->each(50) as $user) {
    
    // Process individual user
    sendEmail($user['email']);
}
// Most memory-efficient for very large datasets
foreach ($db->find()
    ->from('users')
    ->where('age', 18, '>=')
    ->cursor() as $user) {
    
    // Stream processing with minimal memory usage
    exportUser($user);
}
function exportUsersToCsv($db, $filename) {
    $file = fopen($filename, 'w');
    fputcsv($file, ['ID', 'Name', 'Email', 'Age']);
    
    foreach ($db->find()
        ->from('users')
        ->orderBy('id')
        ->cursor() as $user) {
        
        fputcsv($file, [
            $user['id'],
            $user['name'],
            $user['email'],
            $user['age']
        ]);
    }
    
    fclose($file);
}
// Export 1M+ users without memory issues
exportUsersToCsv($db, 'users_export.csv');
| Method | Memory Usage | Best For | 
|---|---|---|
| get() | High (loads all data) | Small datasets, complex processing | 
| batch() | Medium (configurable chunks) | Bulk operations, parallel processing | 
| each() | Medium (internal buffering) | Individual record processing | 
| cursor() | Low (streaming) | Large datasets, simple processing | 
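As a rough sketch of combining these methods, batch() pairs naturally with insertMulti() for chunked copies between tables; the users_archive table name here is purely illustrative:
// Copy inactive users in chunks of 500 without loading the whole table into memory
foreach ($db->find()->from('users')->where('active', 0)->orderBy('id')->batch(500) as $chunk) {
    $db->find()->table('users_archive')->insertMulti($chunk);
}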
PDOdb supports PSR-16 (Simple Cache) for caching query results to improve performance.
use Symfony\Component\Cache\Adapter\FilesystemAdapter;
use Symfony\Component\Cache\Psr16Cache;
use tommyknocker\pdodb\PdoDb;
// Create PSR-16 cache
$adapter = new FilesystemAdapter();
$cache = new Psr16Cache($adapter);
// Pass cache to PdoDb
$db = new PdoDb(
    'mysql',
    [
        'host' => 'localhost',
        'dbname' => 'myapp',
        'username' => 'user',
        'password' => 'pass',
        'cache' => [
            'prefix' => 'myapp_',        // Optional: cache key prefix
            'default_ttl' => 3600,       // Optional: default TTL in seconds
            'enabled' => true,           // Optional: enable/disable
        ],
    ],
    [],
    null,
    $cache  // PSR-16 cache instance
);
// Cache for 1 hour (3600 seconds)
$products = $db->find()
    ->from('products')
    ->where('category', 'Electronics')
    ->cache(3600)  // Enable caching
    ->get();
// Custom cache key
$featured = $db->find()
    ->from('products')
    ->where('featured', 1)
    ->cache(3600, 'featured_products')  // Custom key
    ->get();
// Disable caching for specific query
$fresh = $db->find()
    ->from('products')
    ->cache(3600)
    ->noCache()  // Override and disable
    ->get();
// Cache get() - all rows
$all = $db->find()->from('users')->cache(600)->get();
// Cache getOne() - single row
$user = $db->find()->from('users')->where('id', 1)->cache(600)->getOne();
// Cache getValue() - single value
$count = $db->find()->from('users')->select([Db::count()])->cache(600)->getValue();
// Cache getColumn() - column values
$names = $db->find()->from('users')->select('name')->cache(600)->getColumn();
| Option | Type | Default | Description | 
|---|---|---|---|
| prefix | string | 'pdodb_' | Cache key prefix for namespacing | 
| default_ttl | int | 3600 | Default time-to-live in seconds | 
| enabled | bool | true | Global cache enable/disable | 
- Symfony Cache (recommended): symfony/cache
- Redis: Via Symfony Redis adapter
- APCu: Via Symfony APCu adapter
- Memcached: Via Symfony Memcached adapter
- Filesystem: Via Symfony Filesystem adapter
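For instance, wiring the Redis backend through Symfony Cache could look like this (assumes symfony/cache plus a running Redis server; pass the resulting $cache to PdoDb exactly as in the example above):
use Symfony\Component\Cache\Adapter\RedisAdapter;
use Symfony\Component\Cache\Psr16Cache;

// Create a Redis-backed PSR-16 cache
$redis = RedisAdapter::createConnection('redis://localhost:6379');
$cache = new Psr16Cache(new RedisAdapter($redis));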
| Operation | Database | Cached | Speedup | 
|---|---|---|---|
| Simple SELECT | 5-10ms | 0.1-0.5ms | 10-100x | 
| Complex JOIN | 50-500ms | 0.1-0.5ms | 100-1000x | 
| Aggregation | 100-1000ms | 0.1-0.5ms | 200-2000x | 
See: Query Caching Documentation and Examples
PDOdb includes automatic prepared statement caching to reduce overhead from PDO::prepare() calls. This provides a 20-50% performance boost for repeated queries.
use tommyknocker\pdodb\PdoDb;
$db = new PdoDb('mysql', [
    'host' => 'localhost',
    'dbname' => 'myapp',
    'username' => 'user',
    'password' => 'pass',
    'stmt_pool' => [
        'enabled' => true,
        'capacity' => 256  // Maximum number of cached statements
    ]
]);
The pool uses an LRU (Least Recently Used) cache algorithm:
- Frequently used statements stay in cache
- Less used statements are evicted when capacity is reached
- Each connection has its own pool
- Works transparently with all query types
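A minimal sketch of where this helps: the same parameterized query issued repeatedly is prepared once and then reused from the pool (IDs are illustrative):
foreach ([1, 2, 3, 4, 5] as $id) {
    // Same SQL shape, different bound value - the prepared statement is reused
    $user = $db->find()->from('users')->where('id', $id)->getOne();
}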
| Option | Type | Default | Description | 
|---|---|---|---|
| enabled | bool | false | Enable/disable statement pooling | 
| capacity | int | 256 | Maximum number of cached statements (LRU eviction) | 
| Scenario | Without Pool | With Pool | Improvement | 
|---|---|---|---|
| Repeated SELECT | 100% | 75-80% | 20-25% faster | 
| Repeated INSERT/UPDATE | 100% | 60-70% | 30-40% faster | 
| Mixed queries (low repetition) | 100% | 95-100% | Minimal | 
$pool = $db->connection->getStatementPool();
if ($pool !== null) {
    echo "Hits: " . $pool->getHits() . "\n";
    echo "Misses: " . $pool->getMisses() . "\n";
    echo "Hit Rate: " . ($pool->getHitRate() * 100) . "%\n";
    echo "Cached Statements: " . $pool->size() . "\n";
}
Recommended for:
- Applications with high query repetition (e.g., web apps with common queries)
- Batch processing with similar queries
- Long-running processes with repeated operations
Not recommended for:
- Applications with unique queries (low repetition)
- Memory-constrained environments (though pool overhead is minimal)
- Development/debugging (slight complexity increase)
// Per connection
$db = new PdoDb('mysql', [
    'stmt_pool' => ['enabled' => false]
]);
// Or disable at runtime
$pool = $db->connection->getStatementPool();
if ($pool !== null) {
    $pool->setEnabled(false);
}
See: Connection Management Documentation
PDOdb integrates with PSR-14 Event Dispatcher to provide event-driven monitoring, auditing, and middleware capabilities.
use tommyknocker\pdodb\PdoDb;
use Psr\EventDispatcher\EventDispatcherInterface;
use Symfony\Component\EventDispatcher\EventDispatcher; // Or any PSR-14 implementation
$dispatcher = new EventDispatcher();
$db = new PdoDb('mysql', [
    'host' => 'localhost',
    'dbname' => 'myapp',
    'username' => 'user',
    'password' => 'pass'
]);
$db->setEventDispatcher($dispatcher);
| Event | When Fired | Use Cases | 
|---|---|---|
| ConnectionOpenedEvent | When a connection is opened | Connection monitoring, DSN logging | 
| QueryExecutedEvent | After successful query execution | Query logging, performance monitoring, metrics | 
| QueryErrorEvent | When a query error occurs | Error tracking, alerting, debugging | 
| TransactionStartedEvent | When a transaction begins | Transaction monitoring, audit logs | 
| TransactionCommittedEvent | When a transaction is committed | Audit trails, performance metrics | 
| TransactionRolledBackEvent | When a transaction is rolled back | Error tracking, audit logs | 
use tommyknocker\pdodb\events\QueryExecutedEvent;
$dispatcher->addListener(QueryExecutedEvent::class, function (QueryExecutedEvent $event) {
    echo sprintf(
        "Query: %s (%.2f ms, %d rows)\n",
        substr($event->getSql(), 0, 50),
        $event->getExecutionTime(),
        $event->getRowsAffected()
    );
});
use tommyknocker\pdodb\events\TransactionStartedEvent;
use tommyknocker\pdodb\events\TransactionCommittedEvent;
$dispatcher->addListener(TransactionStartedEvent::class, function ($event) {
    error_log("Transaction started on " . $event->getDriver());
});
$dispatcher->addListener(TransactionCommittedEvent::class, function ($event) {
    error_log(sprintf(
        "Transaction committed (duration: %.2f ms)",
        $event->getDuration()
    ));
});
- Monitoring: Track all database operations in real-time
- Auditing: Maintain complete audit trails of database activity
- Middleware: Implement cross-cutting concerns (logging, metrics, caching)
- Debugging: Capture query details for troubleshooting
- Performance Analysis: Measure execution times and identify slow queries
Works with any PSR-14 compatible event dispatcher:
- Symfony EventDispatcher
- League Event
- Custom implementations
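As a sketch of the "custom implementations" option, a minimal PSR-14 dispatcher only needs dispatch(); the class below is illustrative (it ignores stoppable events for brevity):
use Psr\EventDispatcher\EventDispatcherInterface;

final class SimpleDispatcher implements EventDispatcherInterface
{
    /** @var array<class-string, list<callable>> */
    private array $listeners = [];

    public function addListener(string $eventClass, callable $listener): void
    {
        $this->listeners[$eventClass][] = $listener;
    }

    public function dispatch(object $event): object
    {
        // Call every listener registered for this event's class
        foreach ($this->listeners[$event::class] ?? [] as $listener) {
            $listener($event);
        }
        return $event;
    }
}

// $db->setEventDispatcher(new SimpleDispatcher());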
PDOdb provides a comprehensive exception hierarchy for better error handling and debugging. All exceptions extend PDOException for backward compatibility.
use tommyknocker\pdodb\exceptions\{
    DatabaseException,           // Base exception class
    ConnectionException,         // Connection-related errors
    QueryException,             // Query execution errors
    ConstraintViolationException, // Constraint violations
    TransactionException,       // Transaction errors
    AuthenticationException,     // Authentication errors
    TimeoutException,           // Timeout errors
    ResourceException           // Resource exhaustion
};
use tommyknocker\pdodb\PdoDb;
use tommyknocker\pdodb\exceptions\ConnectionException;
use tommyknocker\pdodb\exceptions\AuthenticationException;
try {
    $db = new PdoDb('mysql', [
        'host' => 'localhost',
        'username' => 'user',
        'password' => 'pass',
        'dbname' => 'db'
    ]);
} catch (ConnectionException $e) {
    error_log("Connection failed: " . $e->getMessage());
    // Connection errors are retryable
    if ($e->isRetryable()) {
        // Implement retry logic
    }
} catch (AuthenticationException $e) {
    error_log("Authentication failed: " . $e->getMessage());
    // Authentication errors are not retryable
}
use tommyknocker\pdodb\exceptions\QueryException;
use tommyknocker\pdodb\exceptions\ConstraintViolationException;
try {
    $users = $db->find()
        ->from('users')
        ->where('invalid_column', 1)
        ->get();
} catch (QueryException $e) {
    error_log("Query error: " . $e->getMessage());
    error_log("SQL: " . $e->getQuery());
} catch (ConstraintViolationException $e) {
    error_log("Constraint violation: " . $e->getMessage());
    error_log("Constraint: " . $e->getConstraintName());
    error_log("Table: " . $e->getTableName());
    error_log("Column: " . $e->getColumnName());
}
use tommyknocker\pdodb\exceptions\TransactionException;
$db->startTransaction();
try {
    $userId = $db->find()->table('users')->insert(['name' => 'Alice']);
    $db->find()->table('orders')->insert(['user_id' => $userId, 'total' => 100]);
    $db->commit();
} catch (TransactionException $e) {
    $db->rollBack();
    error_log("Transaction failed: " . $e->getMessage());
    
    if ($e->isRetryable()) {
        // Implement retry logic for deadlocks, etc.
    }
    throw $e;
}
function executeWithRetry(callable $operation, int $maxRetries = 3): mixed
{
    $attempt = 0;
    $lastException = null;
    
    while ($attempt < $maxRetries) {
        try {
            return $operation();
        } catch (ConnectionException $e) {
            $lastException = $e;
            $attempt++;
            
            if ($attempt < $maxRetries) {
                sleep(2 ** $attempt); // Exponential backoff
            }
        } catch (TimeoutException $e) {
            $lastException = $e;
            $attempt++;
            
            if ($attempt < $maxRetries) {
                sleep(2 ** $attempt);
            }
        } catch (ResourceException $e) {
            $lastException = $e;
            $attempt++;
            
            if ($attempt < $maxRetries) {
                sleep(2 ** $attempt);
            }
        } catch (DatabaseException $e) {
            // Non-retryable errors
            throw $e;
        }
    }
    
    throw $lastException;
}
// Usage
$result = executeWithRetry(function() use ($db) {
    return $db->find()->from('users')->get();
});
use tommyknocker\pdodb\exceptions\DatabaseException;
function handleDatabaseError(DatabaseException $e): void
{
    $errorData = $e->toArray();
    
    // Log structured error data
    error_log(json_encode([
        'timestamp' => date('c'),
        'exception_type' => $errorData['exception'],
        'message' => $errorData['message'],
        'code' => $errorData['code'],
        'driver' => $errorData['driver'],
        'category' => $errorData['category'],
        'retryable' => $errorData['retryable'],
        'query' => $errorData['query'],
        'context' => $errorData['context']
    ]));
    
    // Send alerts for critical errors
    if ($e instanceof AuthenticationException || 
        $e instanceof ResourceException) {
        sendCriticalAlert($e);
    }
}
try {
    $users = $db->find()->from('users')->get();
} catch (DatabaseException $e) {
    handleDatabaseError($e);
    throw $e;
}
All exceptions provide rich context information:
catch (DatabaseException $e) {
    // Basic properties
    echo "Message: " . $e->getMessage() . "\n";
    echo "Code: " . $e->getCode() . "\n";
    echo "Driver: " . $e->getDriver() . "\n";
    echo "Query: " . $e->getQuery() . "\n";
    echo "Category: " . $e->getCategory() . "\n";
    echo "Retryable: " . ($e->isRetryable() ? 'Yes' : 'No') . "\n";
    
    // Context information
    $context = $e->getContext();
    echo "Context: " . json_encode($context) . "\n";
    
    // Add custom context
    $e->addContext('user_id', 123);
    
    // Convert to array for logging
    $errorData = $e->toArray();
}
catch (ConstraintViolationException $e) {
    echo "Constraint: " . $e->getConstraintName() . "\n";
    echo "Table: " . $e->getTableName() . "\n";
    echo "Column: " . $e->getColumnName() . "\n";
    
    // Handle specific constraint violations
    if ($e->getConstraintName() === 'unique_email') {
        // Handle duplicate email
        $existingUser = $db->find()
            ->from('users')
            ->where('email', $email)
            ->getOne();
        
        if ($existingUser) {
            // Update existing user instead
            $db->find()
                ->table('users')
                ->where('email', $email)
                ->update(['last_login' => date('Y-m-d H:i:s')]);
        }
    }
}catch (TimeoutException $e) {
    echo "Timeout: " . $e->getTimeoutSeconds() . "s\n";
    
    // Implement timeout-specific handling
    if ($e->getTimeoutSeconds() > 30) {
        // Long timeout - might be a complex query
        logSlowQuery($e->getQuery());
    }
}
catch (ResourceException $e) {
    echo "Resource Type: " . $e->getResourceType() . "\n";
    
    // Handle resource exhaustion
    if ($e->getResourceType() === 'connections') {
        // Implement connection pooling or queuing
        queueRequest();
    }
}
All new exceptions extend PDOException, so existing code continues to work:
// Old way (still works)
try {
    $result = $db->query('SELECT * FROM users');
} catch (PDOException $e) {
    // Generic error handling
}
// New way (recommended)
try {
    $result = $db->query('SELECT * FROM users');
} catch (ConnectionException $e) {
    // Handle connection issues specifically
} catch (QueryException $e) {
    // Handle query issues specifically
} catch (DatabaseException $e) {
    // Handle any other database issues
}
// ❌ Slow: Multiple single inserts
foreach ($users as $user) {
    $db->find()->table('users')->insert($user);
}
// ✅ Fast: Single batch insert
$db->find()->table('users')->insertMulti($users);
// ❌ Dangerous: Can load millions of rows
$users = $db->find()->from('users')->get();
// ✅ Safe: Limited results
$users = $db->find()->from('users')->limit(1000)->get();
For frequently queried JSON paths, create indexes (MySQL 5.7+):
-- Create virtual column + index
ALTER TABLE users 
ADD COLUMN meta_age INT AS (JSON_EXTRACT(meta, '$.age'));
CREATE INDEX idx_meta_age ON users(meta_age);
// ✅ Good: Reuse connection for multiple operations
$db->addConnection('main', $config);
$users = $db->connection('main')->find()->from('users')->get();
$orders = $db->connection('main')->find()->from('orders')->get();
Set appropriate timeouts for different operations:
// Set timeout for long-running queries
$db->setTimeout(60); // 60 seconds
// Check current timeout
$currentTimeout = $db->getTimeout();
// Different timeouts for different connections
$db->addConnection('fast', $fastConfig);
$db->addConnection('slow', $slowConfig);
$db->connection('fast')->setTimeout(5);   // Quick queries
$db->connection('slow')->setTimeout(300); // Long-running reports
All queries automatically use prepared statements - no action needed!
// Automatically uses prepared statements
$users = $db->find()
    ->from('users')
    ->where('age', 18, '>')
    ->get();
use tommyknocker\pdodb\PdoDb;
use Monolog\Logger;
use Monolog\Handler\StreamHandler;
// Create logger
$logger = new Logger('database');
$logger->pushHandler(new StreamHandler('php://stdout'));
// Initialize with logger
$db = new PdoDb('mysql', $config, [], $logger);
// All queries will be logged with parameters
$users = $db->find()->from('users')->get();
// Build query but don't execute
$query = $db->find()
    ->from('users')
    ->where('age', 18, '>');
// Get generated SQL
$sql = $query->getLastQuery();
echo "SQL: " . $sql . "\n";
// Get bound parameters
$params = $query->getLastParams();
print_r($params);
$users = $db->find()->from('users')->where('age', 18, '>')->get();
echo "Found " . count($users) . " users\n";
echo "Memory used: " . memory_get_usage(true) / 1024 / 1024 . " MB\n";
use tommyknocker\pdodb\helpers\Db;
// Raw SQL expressions (when no helper available)
Db::raw('age + :years', ['years' => 5])
// External table references in subqueries (manual)
Db::ref('users.id')  // Equivalent to Db::raw('users.id')
// Using helper functions when available
Db::concat('first_name', ' ', 'last_name')
// Escape strings
Db::escape("O'Reilly")
// Configuration
Db::config('FOREIGN_KEY_CHECKS', 1)
Additional frequently used helpers:
// Strings
Db::left('name', 2);
Db::right('name', 2);
Db::position(Db::raw("'@'"), 'email');
Db::repeat(Db::raw("'-'"), 5);
Db::reverse('name');
Db::padLeft('name', 8, ' ');
Db::padRight('name', 8, '.');
// Numbers
Db::ceil('price');
Db::floor('price');
Db::power('score', 2);
Db::sqrt('distance');
Db::exp(1);
Db::ln('value');
Db::log('value');
Db::trunc('price', 1);
// Dates
Db::addInterval('created_at', '1', 'DAY');
Db::subInterval('created_at', '2', 'HOUR');
// Aggregates
Db::groupConcat('name', ', ', true);
use tommyknocker\pdodb\helpers\Db;
Db::null()                              // NULL
Db::isNull('column')                    // column IS NULL
Db::isNotNull('column')                 // column IS NOT NULL
Db::ifNull('column', 'default')         // IFNULL(column, 'default')
Db::coalesce('col1', 'col2', 'default') // COALESCE(col1, col2, 'default')
Db::nullIf('col1', 'col2')              // NULLIF(col1, col2)
use tommyknocker\pdodb\helpers\Db;
Db::true()    // TRUE (1)
Db::false()   // FALSE (0)
Db::default() // DEFAULT
use tommyknocker\pdodb\helpers\Db;
Db::inc()              // age + 1
Db::dec(5)             // age - 5
Db::abs('column')      // ABS(column)
Db::round('column', 2) // ROUND(column, 2)
Db::mod('a', 'b')      // MOD(a, b) or a % b
use tommyknocker\pdodb\helpers\Db;
Db::concat('first_name', ' ', 'last_name') // CONCAT(...)
Db::upper('name')                          // UPPER(name)
Db::lower('email')                         // LOWER(email)
Db::trim('text')                           // TRIM(text)
Db::ltrim('text')                          // LTRIM(text)
Db::rtrim('text')                          // RTRIM(text)
Db::length('text')                         // LENGTH(text)
Db::substring('text', 1, 5)                // SUBSTRING(text, 1, 5)
Db::replace('text', 'old', 'new')          // REPLACE(text, 'old', 'new')
use tommyknocker\pdodb\helpers\Db;
Db::like('email', '%@example.com')      // email LIKE '%@example.com'
Db::ilike('name', 'john%')              // Case-insensitive LIKE
Db::not(Db::like('email', '%@spam.com')) // NOT LIKE
Db::between('age', 18, 65)              // age BETWEEN 18 AND 65
Db::notBetween('age', 0, 17)            // age NOT BETWEEN 0 AND 17
Db::in('id', [1, 2, 3])                 // id IN (1, 2, 3)
Db::notIn('status', ['deleted', 'banned']) // status NOT IN (...)
use tommyknocker\pdodb\helpers\Db;
Db::case([
    ['age < 18', "'minor'"],
    ['age < 65', "'adult'"]
], "'senior'")
// CASE WHEN age < 18 THEN 'minor' WHEN age < 65 THEN 'adult' ELSE 'senior' END
use tommyknocker\pdodb\helpers\Db;
Db::now()                    // NOW() or CURRENT_TIMESTAMP
Db::now('1 DAY')             // NOW() + INTERVAL 1 DAY
Db::ts()                     // UNIX_TIMESTAMP()
Db::curDate()                // CURDATE()
Db::curTime()                // CURTIME()
Db::date('created_at')       // DATE(created_at)
Db::time('created_at')       // TIME(created_at)
Db::year('created_at')       // YEAR(created_at)
Db::month('created_at')      // MONTH(created_at)
Db::day('created_at')        // DAY(created_at)
Db::hour('created_at')       // HOUR(created_at)
Db::minute('created_at')     // MINUTE(created_at)
Db::second('created_at')     // SECOND(created_at)
use tommyknocker\pdodb\helpers\Db;
Db::count()             // COUNT(*)
Db::count('DISTINCT id') // COUNT(DISTINCT id)
Db::sum('amount')       // SUM(amount)
Db::avg('rating')       // AVG(rating)
Db::min('price')        // MIN(price)
Db::max('price')        // MAX(price)
use tommyknocker\pdodb\helpers\Db;
Db::cast('123', 'INTEGER')       // CAST('123' AS INTEGER)
Db::greatest('a', 'b', 'c')      // GREATEST(a, b, c)
Db::least('a', 'b', 'c')         // LEAST(a, b, c)
use tommyknocker\pdodb\helpers\Db;
// Create JSON
Db::jsonObject(['key' => 'value'])     // '{"key":"value"}'
Db::jsonArray('a', 'b', 'c')           // '["a","b","c"]'
// Query JSON
Db::jsonPath('meta', ['age'], '>', 18)      // JSON path comparison
Db::jsonContains('tags', 'php')             // Check if contains value
Db::jsonContains('tags', ['php', 'mysql'])  // Check if contains all values
Db::jsonExists('meta', ['city'])            // Check if path exists
// Extract JSON
Db::jsonGet('meta', ['city'])               // Extract value at path
Db::jsonExtract('meta', ['city'])           // Alias for jsonGet
Db::jsonLength('tags')                      // Array/object length
Db::jsonKeys('meta')                        // Object keys
Db::jsonType('tags')                        // Value type
use tommyknocker\pdodb\helpers\Db;
// Export to JSON
$data = $db->find()->from('users')->get();
$json = Db::toJson($data);
// Export to CSV
$csv = Db::toCsv($data);
// Export to XML
$xml = Db::toXml($data);
// Custom options
$json = Db::toJson($data, JSON_UNESCAPED_SLASHES);
$csv = Db::toCsv($data, ';');              // Semicolon delimiter
$xml = Db::toXml($data, 'users', 'user');   // Custom elements
use tommyknocker\pdodb\helpers\Db;
// Full-text search (requires FTS indexes)
$results = $db->find()
    ->from('articles')
    ->where(Db::fulltextMatch('title, content', 'database tutorial'))
    ->get();
// Single column search
$results = $db->find()
    ->from('articles')
    ->where(Db::fulltextMatch('title', 'PHP'))
    ->get();
// Get table structure
$structure = $db->describe('users');
// Get indexes via QueryBuilder
$indexes = $db->find()->from('users')->indexes();
// Get indexes via direct call
$indexes = $db->indexes('users');
// Get foreign keys via QueryBuilder
$foreignKeys = $db->find()->from('orders')->keys();
// Get foreign keys via direct call
$foreignKeys = $db->keys('orders');
// Get constraints via QueryBuilder
$constraints = $db->find()->from('users')->constraints();
// Get constraints via direct call
$constraints = $db->constraints('users');
| Method | Description |
|---|---|
| find() | Returns QueryBuilder instance | 
| rawQuery(string|RawValue, array) | Execute raw SQL, returns array of rows | 
| rawQueryOne(string|RawValue, array) | Execute raw SQL, returns first row | 
| rawQueryValue(string|RawValue, array) | Execute raw SQL, returns single value | 
| startTransaction() | Begin transaction | 
| commit() | Commit transaction | 
| rollBack() | Roll back transaction | 
| lock(array|string) | Lock tables | 
| unlock() | Unlock tables | 
| setLockMethod(string) | Set lock method (READ/WRITE) | 
| describe(string) | Get table structure | 
| indexes(string) | Get all indexes for a table | 
| keys(string) | Get foreign key constraints | 
| constraints(string) | Get all constraints (PK, UNIQUE, FK, CHECK) | 
| explain(string, array) | Analyze query execution plan | 
| explainAnalyze(string, array) | Analyze query with execution | 
| ping() | Check database connection | 
| disconnect() | Close connection | 
| setTimeout(int) | Set query timeout in seconds | 
| getTimeout() | Get current query timeout | 
| addConnection(name, config, options, logger) | Add connection to pool | 
| connection(name) | Switch to named connection | 
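As a quick orientation, here is a minimal sketch that combines a few of the methods above. The table and column names are illustrative, and the positional `?` placeholder style for raw queries is an assumption:

```php
// Sketch only: assumes a configured $db instance and a `users` table
$db->startTransaction();
try {
    // Fluent insert via the query builder; insert() returns the new ID
    $id = $db->find()->table('users')->insert(['name' => 'Alice', 'age' => 30]);

    // Raw SQL escape hatch (placeholder style assumed)
    $user = $db->rawQueryOne('SELECT * FROM users WHERE id = ?', [$id]);

    $db->commit();
} catch (\Throwable $e) {
    $db->rollBack();
    throw $e;
}
```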
| Method | Description | 
|---|---|
| table(string)/from(string) | Set target table (supports schema.table and aliases) | 
| prefix(string) | Set table prefix for this query | 
| select(array|string|RawValue) | Specify columns to select | 
| Method | Description | 
|---|---|
| where(...)/andWhere(...)/orWhere(...) | Add WHERE conditions | 
| whereIn(column, callable|QueryBuilder)/whereNotIn(column, callable|QueryBuilder) | Add WHERE IN/NOT IN with subquery | 
| whereExists(callable|QueryBuilder)/whereNotExists(callable|QueryBuilder) | Add WHERE EXISTS/NOT EXISTS with subquery | 
| where(column, QueryBuilder, operator) | Add WHERE condition with QueryBuilder subquery | 
| join(...)/leftJoin(...)/rightJoin(...)/innerJoin(...) | Add JOIN clauses | 
| groupBy(...) | Add GROUP BY clause | 
| having(...)/orHaving(...) | Add HAVING conditions | 
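A hedged sketch of the subquery variants; the exact callable signature (receiving a nested QueryBuilder to configure) is an assumption based on the `callable|QueryBuilder` parameter types above:

```php
// Orders whose user_id appears in a subquery of active users (illustrative schema)
$orders = $db->find()
    ->from('orders')
    ->whereIn('user_id', function ($q) {
        $q->from('users')
          ->select('id')
          ->where('status', 'active');
    })
    ->get();
```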
| Method | Description | 
|---|---|
| orderBy(string|array|RawValue, direction = 'ASC') | Add ORDER BY clause. Supports: single column, array of columns, comma-separated string | 
| limit(int) | Set LIMIT | 
| offset(int) | Set OFFSET | 
| option(string|array) | Add query options (e.g., DISTINCT, SQL_CALC_FOUND_ROWS) | 
| Method | Description | 
|---|---|
| insert(array) | Insert single row, returns inserted ID | 
| insertMulti(array) | Insert multiple rows, returns count | 
| update(array) | Update rows, returns affected count | 
| delete() | Delete rows, returns affected count | 
| truncate() | Truncate table | 
| replace(array)/replaceMulti(array) | MySQL REPLACE operations | 
| onDuplicate(array) | Build UPSERT clause (dialect-specific) | 
| Method | Description | 
|---|---|
| loadCsv(file, options) | CSV loader (uses COPY/LOAD DATA when available) | 
| loadXml(file, options) | XML loader | 
| loadJson(file, options) | JSON loader (supports array and NDJSON formats) | 
| Method | Description | 
|---|---|
| get() | Execute SELECT, return all rows | 
| getOne() | Execute SELECT, return first row | 
| getColumn() | Execute SELECT, return single column values | 
| getValue() | Execute SELECT, return single value | 
| exists() | Check if any rows match conditions | 
| notExists() | Check if no rows match conditions | 
| tableExists(string) | Check if table exists | 
| Method | Description | 
|---|---|
| batch(int $batchSize = 100) | Process data in batches using Generator | 
| each(int $batchSize = 100) | Process one record at a time using Generator | 
| cursor() | Stream results with minimal memory usage using Generator | 
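The generators above can replace manual LIMIT/OFFSET loops when scanning large tables. In this sketch, `handleRow()` and `handleBatch()` are hypothetical callbacks, and the yielded shapes follow the descriptions in the table:

```php
// One row at a time (constant memory)
foreach ($db->find()->from('logs')->orderBy('id')->each(200) as $row) {
    handleRow($row);          // hypothetical callback
}

// Or in arrays of up to 500 rows
foreach ($db->find()->from('logs')->orderBy('id')->batch(500) as $rows) {
    handleBatch($rows);       // hypothetical callback
}
```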
| Method | Description | 
|---|---|
| explain() | Execute EXPLAIN query to analyze execution plan | 
| explainAnalyze() | Execute EXPLAIN ANALYZE (PostgreSQL) or EXPLAIN FORMAT=JSON (MySQL) | 
| describe() | Execute DESCRIBE to get table structure | 
| toSQL() | Convert query to SQL string and parameters | 
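A small sketch of `toSQL()` for debugging; the exact return shape (SQL string plus bound parameters) is an assumption, so inspect the result rather than destructuring it blindly:

```php
// Build the query without executing it
$query = $db->find()
    ->from('users')
    ->where('status', 'active')
    ->limit(10);

var_dump($query->toSQL());   // expected: generated SQL and its parameters
```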
| Method | Description | 
|---|---|
| selectJson(col, path, alias, asText) | Select JSON column or path | 
| whereJsonPath(col, path, operator, value, cond) | Add JSON path condition | 
| whereJsonContains(col, value, path, cond) | Add JSON contains condition | 
| whereJsonExists(col, path, cond) | Add JSON path existence condition | 
| jsonSet(col, path, value) | Set JSON value | 
| jsonRemove(col, path) | Remove JSON path | 
| orderByJson(col, path, direction) | Order by JSON path | 
| Method | Description | 
|---|---|
| asObject() | Set fetch mode to objects instead of arrays | 
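With `asObject()`, rows come back as objects instead of associative arrays (a brief illustrative sketch):

```php
$user = $db->find()->from('users')->asObject()->getOne();
echo $user->name;   // property access instead of $user['name']
```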
PDOdb handles most differences automatically, but here are some key points:
- MySQL: Backticks `column`
- PostgreSQL: Double quotes "column"
- SQLite: Double quotes "column"
Automatically handled by the library.
- MySQL: ON DUPLICATE KEY UPDATE
- PostgreSQL: ON CONFLICT ... DO UPDATE SET
- SQLite: ON CONFLICT ... DO UPDATE SET
Use onDuplicate() for portable UPSERT:
$db->find()->table('users')->onDuplicate([
    'age' => Db::inc()
])->insert(['email' => 'user@example.com', 'age' => 25]);
- MySQL: Native REPLACE statement
- PostgreSQL: Emulated via UPSERT
- SQLite: Native REPLACE statement
$db->find()->table('users')->replace(['id' => 1, 'name' => 'Alice']);
- MySQL/PostgreSQL: Native TRUNCATE TABLE
- SQLite: Emulated via DELETE FROM + reset of AUTOINCREMENT
$db->find()->table('users')->truncate();
- MySQL: LOCK TABLES ... READ/WRITE
- PostgreSQL: LOCK TABLE ... IN ... MODE
- SQLite: BEGIN IMMEDIATE
$db->lock(['users'])->setLockMethod('WRITE');
- MySQL: Uses JSON_EXTRACT, JSON_CONTAINS, etc.
- PostgreSQL: Uses ->, ->>, @> operators
- SQLite: Uses json_extract, json_each, etc.
All handled transparently through Db::json*() helpers.
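For instance, the same JSON condition can run unchanged on MySQL, PostgreSQL, and SQLite; the column names below are illustrative:

```php
use tommyknocker\pdodb\helpers\Db;

// Emitted as JSON_EXTRACT/JSON_CONTAINS on MySQL, ->>/@> on PostgreSQL,
// and json_extract/json_each on SQLite
$adults = $db->find()
    ->from('users')
    ->where(Db::jsonPath('meta', ['age'], '>=', 18))
    ->where(Db::jsonContains('tags', 'php'))
    ->get();
```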
- MySQL: LOAD DATA [LOCAL] INFILE
- PostgreSQL: COPY FROM
- SQLite: Row-by-row inserts in a transaction
$db->find()->table('users')->loadCsv('/path/to/file.csv');
All dialects support efficient multi-row inserts. The library generates unique placeholders (:name_0, :name_1) to avoid PDO binding conflicts:
$db->find()->table('users')->insertMulti([
    ['name' => 'Alice', 'age' => 30],
    ['name' => 'Bob', 'age' => 25]
]);
Problem: PDO extension not installed.
Solution: Install the required PHP extension:
# Ubuntu/Debian
sudo apt-get install php8.4-mysql php8.4-pgsql php8.4-sqlite3
# macOS
brew install php
# Check installed extensions
php -m | grep pdo
Problem: SQLite compiled without the JSON1 extension.
Solution: Check if JSON support is available:
sqlite3 :memory: "SELECT json_valid('{}')"
If this returns an error, you need to recompile SQLite with JSON1 or use a pre-built version with JSON support.
Problem: Using OFFSET without LIMIT in SQLite.
Solution: Always use LIMIT with OFFSET in SQLite:
// ❌ Doesn't work in SQLite
$db->find()->from('users')->offset(10)->get();
// ✅ Works
$db->find()->from('users')->limit(20)->offset(10)->get();
Problem: JSON operations can be slow on large datasets without indexes.
Solutions:
- Add an index on a generated column (MySQL 5.7+):
  ALTER TABLE users ADD COLUMN meta_age INT AS (JSON_EXTRACT(meta, '$.age'));
  CREATE INDEX idx_meta_age ON users(meta_age);
- Denormalize frequently accessed fields:
  ALTER TABLE users ADD COLUMN age INT;
  -- Copy from JSON
  UPDATE users SET age = JSON_EXTRACT(meta, '$.age');
- Add an expression index (PostgreSQL):
  CREATE INDEX idx_meta_age ON users((meta->>'age'));
Problem: Connection pool not properly managed.
Solution: Reuse connections and disconnect when done:
// ✅ Good: Reuse connection
$db->addConnection('main', $config);
$users = $db->connection('main')->find()->from('users')->get();
$orders = $db->connection('main')->find()->from('orders')->get();
// Disconnect when completely done
$db->disconnect();
Problem: Loading millions of rows into memory.
Solution: Use LIMIT or process in chunks:
// Process in chunks
$offset = 0;
$limit = 1000;
while (true) {
    $users = $db->find()
        ->from('users')
        ->limit($limit)
        ->offset($offset)
        ->get();
    
    if (empty($users)) break;
    
    // Process $users...
    
    $offset += $limit;
}
The project includes comprehensive PHPUnit tests for MySQL, PostgreSQL, and SQLite.
# Run all tests
./vendor/bin/phpunit
# Run specific dialect tests
./vendor/bin/phpunit tests/PdoDbMySQLTest.php
./vendor/bin/phpunit tests/PdoDbPostgreSQLTest.php
./vendor/bin/phpunit tests/PdoDbSqliteTest.php
# Run with coverage
./vendor/bin/phpunit --coverage-html coverage
- MySQL: Running instance on localhost:3306
- PostgreSQL: Running instance on localhost:5432
- SQLite: No setup required (uses :memory:)
Tests are designed to run in containers or against local instances. Recommended CI workflow:
# GitHub Actions example
- name: Run tests
  run: ./vendor/bin/phpunit
  env:
    MYSQL_HOST: 127.0.0.1
    MYSQL_PORT: 3306
    PGSQL_HOST: 127.0.0.1
    PGSQL_PORT: 5432
The library provides standardized error codes for all supported database dialects through the DbError class:
use tommyknocker\pdodb\helpers\DbError;
// MySQL error codes
DbError::MYSQL_CONNECTION_LOST        // 2006
DbError::MYSQL_CANNOT_CONNECT         // 2002
DbError::MYSQL_CONNECTION_KILLED      // 2013   
DbError::MYSQL_DUPLICATE_KEY          // 1062
DbError::MYSQL_TABLE_EXISTS           // 1050
// PostgreSQL error codes (SQLSTATE)
DbError::POSTGRESQL_CONNECTION_FAILURE        // '08006'
DbError::POSTGRESQL_CONNECTION_DOES_NOT_EXIST  // '08003'
DbError::POSTGRESQL_UNIQUE_VIOLATION          // '23505'
DbError::POSTGRESQL_UNDEFINED_TABLE           // '42P01'
// SQLite error codes
DbError::SQLITE_ERROR      // 1
DbError::SQLITE_BUSY       // 5
DbError::SQLITE_LOCKED     // 6
DbError::SQLITE_CONSTRAINT // 19
DbError::SQLITE_ROW        // 100
DbError::SQLITE_DONE       // 101
// Get retryable error codes for specific driver
$mysqlErrors = DbError::getMysqlRetryableErrors();
$postgresqlErrors = DbError::getPostgresqlRetryableErrors();
$sqliteErrors = DbError::getSqliteRetryableErrors();
// Get retryable errors for any driver
$errors = DbError::getRetryableErrors('mysql');
// Check if error is retryable
$isRetryable = DbError::isRetryable(2006, 'mysql'); // true
// Get human-readable error description
$description = DbError::getDescription(2006, 'mysql');
// Returns: "MySQL server has gone away"use tommyknocker\pdodb\helpers\DbError;
$db = new PdoDb('mysql', [
    'host' => 'localhost',
    'dbname' => 'test',
    'retry' => [
        'enabled' => true,
        'max_attempts' => 3,
        'delay_ms' => 1000,
        'retryable_errors' => DbError::getRetryableErrors('mysql'), // Use helper method
    ]
]);
// Or specify individual error codes
$db = new PdoDb('mysql', [
    'host' => 'localhost',
    'dbname' => 'test',
    'retry' => [
        'enabled' => true,
        'retryable_errors' => [
            DbError::MYSQL_CONNECTION_LOST,
            DbError::MYSQL_CANNOT_CONNECT,
            DbError::MYSQL_CONNECTION_KILLED,
        ]
    ]
]);
The retry mechanism includes comprehensive validation to prevent invalid configurations:
// Valid configuration
$db = new PdoDb('mysql', [
    'host' => 'localhost',
    'dbname' => 'test',
    'retry' => [
        'enabled' => true,
        'max_attempts' => 3,        // Must be 1-100
        'delay_ms' => 1000,         // Must be 0-300000ms (5 minutes)
        'backoff_multiplier' => 2.0, // Must be 1.0-10.0
        'max_delay_ms' => 10000,    // Must be >= delay_ms, max 300000ms
        'retryable_errors' => [2006, '08006'] // Must be array of int/string
    ]
]);
// Invalid configurations will throw InvalidArgumentException:
try {
    $db = new PdoDb('mysql', [
        'retry' => [
            'enabled' => 'true',     // ❌ Must be boolean
            'max_attempts' => 0,     // ❌ Must be >= 1
            'delay_ms' => -100,      // ❌ Must be >= 0
            'backoff_multiplier' => 0.5, // ❌ Must be >= 1.0
            'max_delay_ms' => 500,   // ❌ Must be >= delay_ms
        ]
    ]);
} catch (InvalidArgumentException $e) {
    echo "Configuration error: " . $e->getMessage();
}
Validation Rules:
- enabled: Must be boolean
- max_attempts: Must be integer 1-100
- delay_ms: Must be integer 0-300000ms (5 minutes)
- backoff_multiplier: Must be number 1.0-10.0
- max_delay_ms: Must be integer 0-300000ms, >= delay_ms
- retryable_errors: Must be array of integers or strings
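To see how these settings interact, here is a rough back-of-the-envelope sketch assuming the usual exponential-backoff formula `delay_ms * backoff_multiplier^(attempt-1)`, capped at `max_delay_ms`; the library's actual schedule may differ in detail:

```php
// Hypothetical wait times for max_attempts=4, delay_ms=1000,
// backoff_multiplier=2.0, max_delay_ms=5000
$delayMs = 1000;
$multiplier = 2.0;
$maxDelayMs = 5000;

for ($attempt = 1; $attempt <= 4; $attempt++) {
    $wait = (int) min($delayMs * ($multiplier ** ($attempt - 1)), $maxDelayMs);
    echo "retry #{$attempt}: wait {$wait} ms\n";   // 1000, 2000, 4000, 5000
}
```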
The retry mechanism provides comprehensive logging for monitoring connection attempts in production:
use Monolog\Handler\TestHandler;
use Monolog\Logger;
// Create a logger (e.g., Monolog)
$testHandler = new TestHandler();
$logger = new Logger('database');
$logger->pushHandler($testHandler);
$db = new PdoDb('mysql', [
    'host' => 'localhost',
    'dbname' => 'test',
    'retry' => [
        'enabled' => true,
        'max_attempts' => 3,
        'delay_ms' => 1000,
    ]
], [], $logger);
// Set logger on the connection
$connection = $db->connection;
if ($connection instanceof \tommyknocker\pdodb\connection\RetryableConnection) {
    $reflection = new \ReflectionClass($connection);
    $loggerProperty = $reflection->getProperty('logger');
    $loggerProperty->setAccessible(true);
    $loggerProperty->setValue($connection, $logger);
}
// Execute queries - logs will be captured
$result = $db->connection->query('SELECT 1 as test');
// Access captured logs
$records = $testHandler->getRecords();
foreach ($records as $record) {
    echo "[{$record['level_name']}] {$record['message']}\n";
}
Log Messages:
- connection.retry.start - Retry operation begins
- connection.retry.attempt - Individual attempt starts
- connection.retry.success - Successful operation
- connection.retry.attempt_failed - Attempt failed (retryable)
- connection.retry.not_retryable - Error not in retryable list
- connection.retry.exhausted - All retry attempts failed
- connection.retry.retrying - Decision to retry
- connection.retry.wait - Wait delay calculation details
Log Context Includes:
- Method name (query, execute, prepare, transaction)
- Attempt number and max attempts
- Error codes and messages
- Driver information
- Delay calculations and backoff details
Contributions are welcome! Please follow these guidelines:
- Open an issue first for new features or bug reports
- Include failing tests that demonstrate the problem
- Provide details:
- Expected SQL vs. actual SQL
- Environment details (PHP version, database version, driver)
- Steps to reproduce
 
- Follow PSR-12 coding standards
- Write tests for all new functionality
- Test against all three dialects (MySQL, PostgreSQL, SQLite)
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
This project is open source. See LICENSE file for details.
Inspired by ThingEngineer/PHP-MySQLi-Database-Class
Built with ❤️ for the PHP community.