feat: Add Top-K and min co-occurrence filters to NLP edge extraction#2273

Open
nahisaho wants to merge 1 commit into microsoft:main from nahisaho:fix/noun-graph-topk-cooccurrence-filter

Conversation


@nahisaho nahisaho commented Mar 8, 2026

Summary

The NLP-based graph extraction in build_noun_graph.py uses itertools.combinations() to create co-occurrence edges between all noun phrases in each text chunk. With entity-dense corpora (e.g., scientific/technical text), this O(N²) all-pairs algorithm produces a massive number of edges that overwhelm downstream processing and paradoxically make the "fast/lazy" NLP mode more expensive than the LLM-based "standard" mode.

Problem

Root Cause

_extract_edges() calls combinations(sorted(set(titles)), 2) on every text unit. With scientific text averaging ~60 noun-phrase entities per 800-token chunk, this produces C(60,2) = 1,770 pairs per chunk.
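A quick back-of-the-envelope check of that pair count (illustrative only; `titles` is a stand-in for the noun phrases extracted from one chunk):

```python
from itertools import combinations
from math import comb

# Illustrative only: the all-pairs construction described above.
titles = [f"entity_{i}" for i in range(60)]  # ~60 noun phrases in one chunk
edges = list(combinations(sorted(set(titles)), 2))
assert len(edges) == comb(60, 2) == 1770  # quadratic growth in chunk entity count
```

Because the count grows as C(N, 2) = N(N-1)/2, doubling the entities per chunk roughly quadruples the edges.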

Impact (measured on a 20-paper materials-science corpus, 153 chunks)

| Mode | Entities | Relationships | LLM Cost (gpt-4o-mini) |
|------|----------|---------------|------------------------|
| Standard (LLM) | 605 | 1,858 | $0.600 |
| Fast/Lazy (NLP) | 1,147 | 120,287 | $0.929 |

The NLP mode generates 65× more relationships than LLM extraction, causing the "fast" pipeline to be 55% more expensive than Standard despite using no LLM for entity extraction itself. The cost comes from downstream community_reports generation, which must summarize the inflated graph.

Why prune_graph does not fix this

The existing prune_graph workflow step applies PMI-based pruning, but:

  • PMI normalization makes edge weights very uniform (range 0.000006–0.000124 for 120K edges)
  • Moderate pruning (min_edge_weight_pct=40) still leaves ~72K edges
  • Aggressive pruning (min_edge_weight_pct=80, max_node_degree_std=2.0) over-prunes to just 6 edges
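A toy illustration of why percentile pruning struggles here: when ~120K edge weights are nearly uniform over the quoted range, a 40th-percentile cutoff still keeps ~72K edges (synthetic data standing in for the actual PMI weights):

```python
import random

# Synthetic stand-in for the near-uniform PMI weights quoted above.
random.seed(0)
weights = [random.uniform(0.000006, 0.000124) for _ in range(120_000)]

# min_edge_weight_pct=40 keeps everything at or above the 40th percentile.
threshold = sorted(weights)[int(len(weights) * 0.40)]
kept = [w for w in weights if w >= threshold]
# 60% of edges survive regardless of how compressed the weight range is
```

Percentile thresholds remove a fixed fraction of edges by construction, so no threshold short of near-total pruning can distinguish meaningful edges when the weights carry little signal.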

The problem must be addressed at the source — during edge construction, not after.

Solution

This PR adds two configurable parameters to _extract_edges():

1. max_entities_per_chunk (default: 0 = disabled)

When > 0, only the K most globally-frequent entities per text chunk are paired, capping edges at C(K,2) instead of C(N,2).

```yaml
# settings.yaml
extract_graph_nlp:
  max_entities_per_chunk: 15  # C(15,2) = 105 max pairs/chunk
```
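A minimal sketch of what such a Top-K filter can look like (hypothetical: `top_k_pairs` and its arguments are illustrative names, not the actual `_extract_edges()` signature):

```python
from collections import Counter
from itertools import combinations

def top_k_pairs(chunk_titles, global_freq, max_entities_per_chunk=0):
    """Sketch of the Top-K filter: keep only the K most globally
    frequent entities in a chunk before generating pairs."""
    titles = sorted(set(chunk_titles))
    if max_entities_per_chunk > 0 and len(titles) > max_entities_per_chunk:
        titles = sorted(titles, key=lambda t: global_freq[t], reverse=True)
        titles = sorted(titles[:max_entities_per_chunk])  # deterministic pairing order
    return list(combinations(titles, 2))

# Rare entities ("solvent", "flask") are dropped before pairing.
freq = Counter({"graphene": 40, "lithium": 30, "anode": 25, "solvent": 2, "flask": 1})
pairs = top_k_pairs(["graphene", "anode", "lithium", "solvent", "flask"],
                    freq, max_entities_per_chunk=3)
# C(3,2) = 3 pairs instead of C(5,2) = 10
```

Ranking by global frequency (rather than per-chunk frequency) favors entities that recur across the corpus, which are the ones most likely to anchor useful communities downstream.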

2. min_co_occurrence (default: 1 = no filtering)

When > 1, edges appearing in fewer text units are discarded as likely coincidental co-occurrences. In testing, ~57.5% of edges appeared in only 1 chunk.

```yaml
# settings.yaml
extract_graph_nlp:
  min_co_occurrence: 2
```
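The filter itself reduces to a threshold on per-edge chunk counts. A self-contained sketch (hypothetical names, not the actual implementation):

```python
from collections import Counter

def filter_by_co_occurrence(edge_counts, min_co_occurrence=1):
    """Sketch: drop edges observed in fewer than min_co_occurrence
    distinct text units."""
    if min_co_occurrence <= 1:
        return dict(edge_counts)  # default: keep everything
    return {e: n for e, n in edge_counts.items() if n >= min_co_occurrence}

# Count each edge once per chunk (sets deduplicate within a chunk).
counts = Counter()
chunks = [
    {("anode", "graphene"), ("anode", "flask")},
    {("anode", "graphene")},
]
for chunk_edges in chunks:
    counts.update(chunk_edges)

kept = filter_by_co_occurrence(counts, min_co_occurrence=2)
# ("anode", "graphene") survives; ("anode", "flask") appeared in only one chunk
```

Counting distinct text units (not raw mentions) is what makes the threshold meaningful: an edge repeated many times inside a single chunk still counts as one co-occurrence.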

Why these defaults?

Both defaults preserve exact backward compatibility — existing users see zero behavioral change. Users experiencing the edge explosion can opt in by setting these parameters in settings.yaml.

Results

Tested with 20 materials-science papers (153 text chunks), gpt-4o-mini + text-embedding-3-small:

With max_entities_per_chunk=15, min_co_occurrence=2

| Metric | Before (NLP) | After (NLP) | Standard (LLM) |
|--------|--------------|-------------|----------------|
| Relationships | 120,287 | 2,660 | 1,858 |
| Total cost | $0.929 | $0.097 | $0.600 |
| Reduction | | 97.8% | |

Top-K parameter sweep

| K | Relationships | Cost | Query Quality |
|---|---------------|------|---------------|
| 10 | 1,060 | $0.072 | ★☆☆ (too sparse) |
| 15 | 2,660 | $0.097 | ★★★ (best) |
| 20 | 5,108 | $0.149 | ★★★ |
| 30 | 12,172 | $0.270 | ★★★ |

K=15 emerged as the Pareto-optimal setting: best query quality at the lowest cost, outperforming even Standard mode on a local search benchmark.

Changes

| File | Change |
|------|--------|
| `config/defaults.py` | Add `max_entities_per_chunk=0`, `min_co_occurrence=1` to `ExtractGraphNLPDefaults` |
| `config/models/extract_graph_nlp_config.py` | Add corresponding Pydantic fields to `ExtractGraphNLPConfig` |
| `index/workflows/extract_graph_nlp.py` | Thread new parameters from config → `build_noun_graph()` |
| `index/operations/build_noun_graph/build_noun_graph.py` | Implement Top-K selection and co-occurrence filtering in `_extract_edges()` |
| `tests/unit/config/utils.py` | Add assertions for new config fields |
| `tests/unit/indexing/operations/test_build_noun_graph.py` | New: 15 unit tests covering filtering logic, edge cases, backward compatibility |
| `.semversioner/next-release/` | Patch change document |

Backward Compatibility

  • Default behavior unchanged: max_entities_per_chunk=0 disables Top-K, min_co_occurrence=1 keeps all edges
  • Existing integration test (test_extract_graph_nlp) continues to pass with 1,147 entities and 29,442 relationships
  • No breaking API changes — new parameters are keyword-only with defaults
  • No new dependencies

The NLP-based graph extraction (_extract_edges in build_noun_graph.py)
uses itertools.combinations() to create co-occurrence edges between all
noun phrases in each text chunk. With entity-dense corpora (e.g.
scientific text averaging ~60 entities per 800-token chunk), this
produces C(60,2)=1,770 pairs per chunk — leading to 65-70x more
relationships than LLM-based extraction and paradoxically making the
'fast/lazy' NLP mode more expensive than 'standard' LLM mode.

This commit adds two configurable filters to _extract_edges():

1. max_entities_per_chunk (default: 0 = disabled):
   When > 0, only the K most globally-frequent entities per text chunk
   are paired, capping edges at C(K,2) instead of C(N,2). With K=15
   on a 20-paper materials-science corpus, this reduced relationships
   from 120,287 to 2,660 (97.8% reduction) while improving query
   quality compared to Standard mode.

2. min_co_occurrence (default: 1 = no filtering):
   When > 1, edges appearing in fewer text units are discarded as
   likely coincidental co-occurrences. In testing, ~57.5% of edges
   appeared in only 1 chunk.

Both parameters are exposed through settings.yaml via
extract_graph_nlp.max_entities_per_chunk and
extract_graph_nlp.min_co_occurrence, with backward-compatible defaults
that preserve existing behavior.

Includes:
- Config: defaults.py, extract_graph_nlp_config.py
- Pipeline: extract_graph_nlp.py workflow, build_noun_graph.py
- Tests: 15 unit tests for filtering logic
- Semversioner: patch change document

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
@nahisaho nahisaho requested a review from a team as a code owner March 8, 2026 08:25
nahisaho pushed a commit to nahisaho/graphrag-hybrid-installer that referenced this pull request Mar 8, 2026
Features:
- Interactive installer for Microsoft GraphRAG with hybrid NLP extraction
- scispaCy + GiNZA + domain dictionary integration
- NLP edge optimization patch for build_noun_graph.py (Top-K + co-occurrence filter)
- Multi-provider support: OpenAI / Azure OpenAI / Ollama
- MCP Server for Claude Desktop / VS Code Copilot integration
- Domain dictionary builder for specialized corpora

The NLP edge optimization patch addresses the O(N²) relationship explosion
in GraphRAG v3.0.6's Lazy/Fast mode, reducing relationships by 97.8% and
costs by 89.6% while maintaining query quality.

See also: microsoft/graphrag#2273

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>