Summary
In insert_cached() (batch.rs), UNIQUE/PK indexes acquire the index storage write lock twice per insert:
- once during constraint validation (lines 295-326)
- again during index insertion (lines 363-399)
Current Behavior
// Loop 1: Constraint validation (UNIQUE/PK indexes only)
for index_plan in &plan.indexes {
    if !index_plan.is_unique && !index_plan.is_pk { continue; }
    let index_storage_arc = self.get_or_load_index_storage(...)?;
    let mut index_storage_guard = index_storage_arc.write(); // LOCK #1
    // ... validate constraint
} // Lock released

// Main table insert happens here...

// Loop 2: Index insertion (ALL indexes)
for index_plan in &plan.indexes {
    let index_storage_arc = self.get_or_load_index_storage(...)?;
    let mut index_storage_guard = index_storage_arc.write(); // LOCK #2 (again for UNIQUE/PK)
    // ... insert into index
}
Why This Happens
The structure is necessary for correctness:
- Must validate ALL constraints before generating row_id (avoid wasting IDs on failed inserts)
- Must have row_id before inserting into indexes (index entries point to row_id)
- Can't combine loops without risking partial index insertion on constraint failure
Potential Optimization
Cache the Arc<RwLock> handles (and possibly the lock guards) acquired in loop 1 in a SmallVec, then reuse them in loop 2 for UNIQUE/PK indexes. This would reduce lock acquisitions for tables with UNIQUE/PK constraints. A sketch follows.
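A minimal sketch of that idea, assuming parking_lot's RwLock (so write() returns the guard directly, matching the snippet above) and the smallvec crate. IndexPlan, IndexStorage, load_index_storage(), and this insert_cached() signature are simplified stand-ins, not the real batch.rs API:

use std::sync::Arc;

use parking_lot::RwLock;
use smallvec::SmallVec;

struct IndexPlan {
    is_unique: bool,
    is_pk: bool,
}

#[derive(Default)]
struct IndexStorage {
    entries: Vec<(Vec<u8>, u64)>, // (key bytes, row_id) pairs
}

// Stand-in for self.get_or_load_index_storage(...); resolving this twice
// per UNIQUE/PK index is the redundancy being discussed.
fn load_index_storage(_idx: usize) -> Arc<RwLock<IndexStorage>> {
    Arc::new(RwLock::new(IndexStorage::default()))
}

fn insert_cached(plans: &[IndexPlan], key: &[u8]) -> Result<(), String> {
    // Loop 1: constraint validation. Remember each UNIQUE/PK storage Arc by
    // index position so loop 2 can reuse it instead of resolving it again.
    // Most tables have few indexes, so the SmallVec stays on the stack.
    let mut cached: SmallVec<[(usize, Arc<RwLock<IndexStorage>>); 4]> = SmallVec::new();
    for (i, plan) in plans.iter().enumerate() {
        if !plan.is_unique && !plan.is_pk {
            continue;
        }
        let storage = load_index_storage(i);
        {
            let guard = storage.write(); // LOCK #1 (unchanged)
            if guard.entries.iter().any(|(k, _)| k.as_slice() == key) {
                return Err("unique constraint violation".into());
            }
        } // guard dropped here; only the Arc is kept
        cached.push((i, storage));
    }

    let row_id: u64 = 42; // generated only after every constraint has passed

    // Loop 2: index insertion. UNIQUE/PK indexes reuse the cached Arc; the
    // rest still go through the loader.
    for i in 0..plans.len() {
        let storage = match cached.iter().find(|(j, _)| *j == i) {
            Some((_, arc)) => Arc::clone(arc),
            None => load_index_storage(i),
        };
        let mut guard = storage.write(); // LOCK #2 still taken, but no re-lookup
        guard.entries.push((key.to_vec(), row_id));
    }
    Ok(())
}

Note this variant only avoids the second storage lookup; caching the write guards themselves (the "possibly lock guards" part above) would also remove the second write() call for UNIQUE/PK indexes, at the cost of holding those locks across the main-table insert.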
Impact
- Sub-optimal performance under concurrent load
- More significant for tables with multiple UNIQUE/PK indexes
- The Arc is already cached via a weak reference, so file reopening is avoided (see the sketch below)
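For reference, the weak-reference caching mentioned above works roughly like this (a hypothetical sketch, assuming parking_lot and a name-keyed map; IndexCache, get_or_load, and IndexStorage are illustrative names, not the real batch.rs API):

use std::collections::HashMap;
use std::sync::{Arc, Weak};

use parking_lot::RwLock;

#[derive(Default)]
struct IndexStorage; // stand-in for the on-disk index storage

struct IndexCache {
    by_name: RwLock<HashMap<String, Weak<RwLock<IndexStorage>>>>,
}

impl IndexCache {
    fn get_or_load(&self, name: &str) -> Arc<RwLock<IndexStorage>> {
        // Fast path: upgrade the weak handle if the storage is still alive,
        // so the index file is not reopened.
        if let Some(weak) = self.by_name.read().get(name) {
            if let Some(arc) = weak.upgrade() {
                return arc;
            }
        }
        // Slow path: (re)open the storage and refresh the weak entry.
        let arc = Arc::new(RwLock::new(IndexStorage::default())); // stand-in for opening the file
        self.by_name
            .write()
            .insert(name.to_string(), Arc::downgrade(&arc));
        arc
    }
}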
Priority
Low. Correctness is maintained; this is purely a performance optimization.
Files
src/database/batch.rs:282-399