Complete Architectural Redesign: ATOM now employs a three-module parallel pipeline for dynamic temporal knowledge graph (DTKG) construction and updates.
Atomic Fact Decomposition: A new first module splits text into minimal "atomic facts," addressing the "forgetting effect" where LLMs omit facts in longer contexts.
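The decomposition idea can be sketched as follows. This is a minimal illustration only: the `decompose` heuristic below splits on simple conjunction patterns, whereas ATOM itself delegates this step to an LLM; the function name and splitting rules are assumptions, not the project's API.

```python
import re

def decompose(text: str) -> list[str]:
    """Split a compound sentence into minimal standalone claims.

    Naive stand-in for the LLM-based first module: splits on sentence
    boundaries and coordinating ", and".
    """
    parts = re.split(r"(?:\.\s+|;\s+|,\s+and\s+)", text.strip().rstrip("."))
    return [p.strip() for p in parts if p.strip()]

facts = decompose("Alice joined Acme in 2020, and she became CTO in 2023.")
print(facts)  # two short, self-contained claims
```

Working on such short, self-contained claims is what mitigates the "forgetting effect": each downstream extraction call sees one fact, not a long context.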
Enhanced Exhaustivity and Stability: The new architecture achieves significant gains: ~31% in factual exhaustivity, ~18% in temporal exhaustivity, and ~17% in stability.
Dual-Time Modeling: Implemented dual-time modeling, distinguishing the observation time (t_obs) from the fact's validity interval (t_start/t_end), to prevent temporal misattribution in dynamic KGs.
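A minimal sketch of the dual-time idea as a record type. The field names mirror the release notes (`t_obs`, `t_start`, `t_end`); the class itself and the string-typed timestamps are illustrative assumptions, not ATOM's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class TemporalFact:
    subject: str
    predicate: str
    object: str
    t_start: Optional[str]  # when the fact became true
    t_end: Optional[str]    # when it stopped being true (None = ongoing)
    t_obs: str              # when the source document reported it

# A 2024 article describing a 2020 event: t_obs must not leak into t_start.
fact = TemporalFact("Alice", "joined", "Acme", "2020", None, "2024-05-01")
```

Keeping the two timelines in separate fields is what blocks misattribution: a late-arriving document updates `t_obs` without rewriting the event's own interval.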
Parallel 5-Tuple Extraction: Module 2 now extracts 5-tuples (subject, predicate, object, t_start, t_end) directly and in parallel from the atomic facts.
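One fact per extraction call, fanned out in parallel, can be sketched like this. The `extract_tuple` stub is an assumption standing in for the real per-fact LLM call; only the fan-out pattern is the point.

```python
from concurrent.futures import ThreadPoolExecutor

def extract_tuple(fact: str):
    # Stub extractor: a real implementation would prompt an LLM here.
    subj, _, rest = fact.partition(" ")
    pred, _, obj = rest.partition(" ")
    return (subj, pred, obj, None, None)  # (s, p, o, t_start, t_end)

facts = ["Alice joined Acme", "Bob left Initech"]
with ThreadPoolExecutor() as pool:  # one extraction task per atomic fact
    tuples = list(pool.map(extract_tuple, facts))
print(tuples)
```

Because the atomic facts are independent, extraction latency is bounded by the slowest single call rather than the sum of all calls.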
Parallel Atomic Merge Architecture: Module 3 uses an efficient, parallel pairwise merge algorithm, achieving a 93.8% latency reduction vs. Graphiti and 95.3% vs. iText2KG.
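The pairwise merge is a tree reduction: partial graphs are merged two at a time, so each round runs in parallel and only O(log n) rounds are sequential. In this sketch graphs are plain sets of triples and `merge_pair` is a trivial union; both are assumptions standing in for ATOM's real merge step.

```python
from concurrent.futures import ThreadPoolExecutor

def merge_pair(a: set, b: set) -> set:
    return a | b  # stand-in for entity/relation deduplication

def parallel_merge(graphs: list[set]) -> set:
    with ThreadPoolExecutor() as pool:
        while len(graphs) > 1:
            pairs = list(zip(graphs[::2], graphs[1::2]))
            merged = list(pool.map(lambda p: merge_pair(*p), pairs))
            if len(graphs) % 2:  # odd leftover carries into the next round
                merged.append(graphs[-1])
            graphs = merged
    return graphs[0]

g = parallel_merge([{("a", "r", "b")}, {("b", "r", "c")}, {("a", "r", "b")}])
print(len(g))  # 2: the duplicate triple is collapsed
```

A sequential fold would need n-1 dependent merge steps; the tree shape is where the reported latency reduction comes from.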
LLM-Independent Resolution: Replaced slow LLM-based resolution with distance metrics (cosine similarity) for scalable, parallel merging.
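The resolution criterion can be sketched without any LLM in the loop: two entity mentions merge when the cosine similarity of their embeddings clears a threshold. The toy vectors and the 0.9 threshold below are illustrative assumptions; real embeddings would come from a sentence encoder.

```python
import math

def cosine(u, v):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def same_entity(emb_a, emb_b, threshold=0.9):
    # Pure vector arithmetic: cheap, deterministic, and trivially parallel.
    return cosine(emb_a, emb_b) >= threshold

near = same_entity([1.0, 0.0, 0.1], [1.0, 0.05, 0.1])   # near-duplicates
far = same_entity([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])     # orthogonal
```

Because each comparison is a fixed-cost arithmetic operation rather than an LLM call, resolution shards cleanly across workers, which is what makes the parallel merge scalable.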