Conversation

@enp1s0 (Member) commented Nov 10, 2025

This PR introduces a random orthogonal transformation as a preprocessing step for CAGRA-Q and similar quantized search. Depending on the dataset, this preprocessing achieves higher recall in CAGRA-Q search. The implementation generates an orthogonal matrix through the QR decomposition of a random matrix and then applies it to the dataset matrix.

*[Figure: recall results]*
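
For intuition, here is a minimal host-side sketch of the idea in plain C++: QR-factorize a Gaussian random matrix via modified Gram-Schmidt to obtain an orthogonal Q, then rotate the dataset by it. The function names are hypothetical, and the PR's actual implementation runs the QR decomposition on the GPU:

```cpp
// Sketch: build a dim x dim random orthogonal matrix Q (row-major) by
// orthonormalizing the columns of a Gaussian random matrix, then apply the
// distance-preserving rotation X' = X * Q to a row-major dataset.
#include <cmath>
#include <cstdint>
#include <random>
#include <vector>

std::vector<float> random_orthogonal(int dim, unsigned seed = 42) {
  std::mt19937 gen(seed);
  std::normal_distribution<float> dist(0.f, 1.f);
  std::vector<float> q(static_cast<size_t>(dim) * dim);
  for (auto& v : q) { v = dist(gen); }
  // Modified Gram-Schmidt over columns; a Gaussian random matrix is full rank
  // with probability 1, so no pivoting is needed for this illustration.
  for (int j = 0; j < dim; ++j) {
    for (int k = 0; k < j; ++k) {
      float dot = 0.f;
      for (int i = 0; i < dim; ++i) { dot += q[i * dim + k] * q[i * dim + j]; }
      for (int i = 0; i < dim; ++i) { q[i * dim + j] -= dot * q[i * dim + k]; }
    }
    float norm = 0.f;
    for (int i = 0; i < dim; ++i) { norm += q[i * dim + j] * q[i * dim + j]; }
    norm = std::sqrt(norm);
    for (int i = 0; i < dim; ++i) { q[i * dim + j] /= norm; }
  }
  return q;
}

std::vector<float> rotate_dataset(const std::vector<float>& x,
                                  const std::vector<float>& q,
                                  int64_t n_rows, int dim) {
  std::vector<float> out(static_cast<size_t>(n_rows) * dim, 0.f);
  for (int64_t r = 0; r < n_rows; ++r) {
    for (int j = 0; j < dim; ++j) {
      float acc = 0.f;
      for (int i = 0; i < dim; ++i) { acc += x[r * dim + i] * q[i * dim + j]; }
      out[r * dim + j] = acc;
    }
  }
  return out;
}
```

Because Q is orthogonal, the rotation preserves all pairwise Euclidean distances; it only changes how the quantization error is distributed across dimensions.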

@enp1s0 enp1s0 requested review from a team as code owners November 10, 2025 14:57
@enp1s0 enp1s0 self-assigned this Nov 10, 2025
@enp1s0 enp1s0 added improvement Improves an existing functionality non-breaking Introduces a non-breaking change labels Nov 10, 2025
@enp1s0 enp1s0 changed the title [WIP] random orthogonal transformation preprocess random orthogonal transformation preprocess Nov 10, 2025
@enp1s0 enp1s0 requested a review from a team as a code owner November 11, 2025 02:13
@enp1s0 enp1s0 added feature request New feature or request and removed improvement Improves an existing functionality labels Nov 12, 2025
enp1s0 and others added 8 commits November 12, 2025 16:25
Add aggregate reporting of NVTX ranges to the output of the benchmark executable.

### Usage
```bash
# Measure the CPU and GPU runtime of all NVTX ranges
nsys launch --trace=cuda,nvtx <ANN_BENCH with arguments>
# Measure only the CPU runtime of all NVTX ranges
nsys launch --trace=nvtx <ANN_BENCH with arguments>
# Do not measure/report any NVTX ranges
<ANN_BENCH with arguments>
# Do not measure/report any NVTX ranges within the benchmark, but use nsys profiling as usual
nsys profile ... <ANN_BENCH with arguments>
```

### Implementation

The PR adds a single module `nvtx_stats.hpp` to the benchmark executable; there are no changes to the library at all.
The program leverages the NVIDIA Nsight Systems CLI to collect and export NVTX statistics, and then the SQLite API to aggregate them into the benchmark state:

  1. Detect if the benchmark is run via `nsys launch`; if so, call `nsys start` / `nsys stop` around the benchmark loop; otherwise do nothing.
  2. If the report is generated, read it and query all NVTX events and the GPU correlation data using SQLite (see the sketch after this list).
  3. Aggregate the NVTX events by their short names (without arguments, to reduce the number of columns).
  4. Add them to the benchmark performance counters with the same averaging strategy as the global CPU/GPU runtime.
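
To give a flavor of steps 2–3, here is a minimal stand-alone sketch that sums NVTX range durations per range name from an nsys SQLite export. It assumes the export contains an `NVTX_EVENTS` table with `text`, `start`, and `end` columns (nanosecond timestamps); the real `nvtx_stats.hpp` additionally joins GPU correlation data and folds the results into the benchmark counters:

```cpp
// Minimal sketch: aggregate NVTX range durations from an nsys SQLite export.
// Usage: ./nvtx_agg report.sqlite
#include <sqlite3.h>

#include <cstdio>

int main(int argc, char** argv) {
  if (argc != 2) { std::fprintf(stderr, "usage: %s report.sqlite\n", argv[0]); return 1; }
  sqlite3* db = nullptr;
  if (sqlite3_open_v2(argv[1], &db, SQLITE_OPEN_READONLY, nullptr) != SQLITE_OK) { return 1; }
  // Group events by their range name; `end` is an SQL keyword, hence the quotes.
  const char* sql =
    "SELECT text, COUNT(*), SUM(\"end\" - start) FROM NVTX_EVENTS "
    "WHERE \"end\" IS NOT NULL GROUP BY text ORDER BY 3 DESC;";
  sqlite3_stmt* stmt = nullptr;
  if (sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr) != SQLITE_OK) { sqlite3_close(db); return 1; }
  while (sqlite3_step(stmt) == SQLITE_ROW) {
    std::printf("%-48s calls=%lld total_ns=%lld\n",
                reinterpret_cast<const char*>(sqlite3_column_text(stmt, 0)),
                static_cast<long long>(sqlite3_column_int64(stmt, 1)),
                static_cast<long long>(sqlite3_column_int64(stmt, 2)));
  }
  sqlite3_finalize(stmt);
  sqlite3_close(db);
  return 0;
}
```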

### Performance cost
If the benchmark is **not** run using `nsys launch`, the new functionality adds virtually zero overhead.
Otherwise, there are two sources of overhead:
  1. The usual nsys profiling overheads (minimized by internally disabling unused information via the `nsys start` CLI). These affect the reported performance the same way normal nsys profiling does (especially if CUDA tracing is enabled).
  2. One or more data collection/export events per benchmark case. These add some extra time to the overall run, but do not affect the counters (they are not part of the benchmark loop).
 
Closes rapidsai#1367

Authors:
  - Artem M. Chirkin (https://github.com/achirkin)

Approvers:
  - Tamas Bela Feher (https://github.com/tfeher)

URL: rapidsai#1529
When converting from a DLManagedTensor to an mdspan in our C API, we weren't checking the stride information on the DLManagedTensor. This caused invalid results when passing a strided matrix to functions like cuvsCagraBuild. Fix and add a unit test.
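
For illustration, a stride check of the kind this fix requires might look like the following (based on the DLPack convention that `strides == nullptr` denotes a compact row-major tensor; this is a sketch, not the exact code added by the PR):

```cpp
// Sketch: verify a DLTensor is row-major contiguous before wrapping it in an
// mdspan with a contiguous layout. Per the DLPack spec, a null `strides`
// pointer means the tensor is compact and row-major.
#include <dlpack/dlpack.h>

#include <cstdint>

bool is_row_major_contiguous(const DLTensor& t) {
  if (t.strides == nullptr) { return true; }
  int64_t expected = 1;
  for (int32_t i = t.ndim - 1; i >= 0; --i) {
    // Dimensions of extent 1 may carry arbitrary strides.
    if (t.shape[i] != 1 && t.strides[i] != expected) { return false; }
    expected *= t.shape[i];
  }
  return true;
}
```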

Authors:
  - Ben Frederickson (https://github.com/benfred)
  - Corey J. Nolet (https://github.com/cjnolet)

Approvers:
  - Corey J. Nolet (https://github.com/cjnolet)

URL: rapidsai#1458
…ai#1535)

This PR supports handling the new main branch strategy outlined below:

* [RSN 47 - Changes to RAPIDS branching strategy in 25.12](https://docs.rapids.ai/notices/rsn0047/)

The `update-version.sh` script now supports two modes, controlled via a CLI parameter or an environment variable:

* CLI argument: `--run-context=main|release`
* Environment variable: `RAPIDS_RUN_CONTEXT=main|release`

xref: rapidsai/build-planning#224

Authors:
  - Nate Rock (https://github.com/rockhowse)

Approvers:
  - Jake Awe (https://github.com/AyodeAwe)
  - Corey J. Nolet (https://github.com/cjnolet)
  - MithunR (https://github.com/mythrocks)

URL: rapidsai#1535
This PR introduces **Augmented Core Extraction (ACE)**, an approach proposed by @anaruse for building CAGRA indices on very large datasets that exceed GPU memory capacity. ACE enables users to build high-quality approximate nearest neighbor search indices on datasets that would otherwise be impossible to process on a single GPU. The approach uses host memory if it is large enough and falls back to disk if required.

This work is a collaboration: @anaruse, @tfeher, @achirkin, @mfoerste4 

## Algorithm Description

1. **Dataset Partitioning**: The dataset is partitioned using balanced k-means clustering on sampled data. Each vector is assigned to its two closest partition centroids (primary and augmented; see the sketch after this list). The primary partitions are non-overlapping; the augmentation ensures that cross-partition edges are captured in the final graph. Partitions smaller than a minimum threshold are merged with larger ones to maintain computational efficiency and graph quality: their vectors are reassigned to the nearest valid partitions.
2. **Per-Partition Graph Building**: For each partition, a sub-index is built independently (regular `build_knn_graph()` flow) with its primary vectors plus augmented vectors from neighboring partitions.
3. **Graph Combining**: The per-partition graphs are combined into a single unified CAGRA index. No merging step is needed since the primary partitions are non-overlapping. The in-memory variant remaps the local partition IDs to global dataset IDs to create a correct index. The disk variant stores the backward index mappings (`dataset_mapping.bin`), the reordered dataset (`reordered_dataset.bin`), and the optimized CAGRA graph (`cagra_graph.bin`) on disk. The index is then incomplete, as shown by `cuvs::neighbors::index::on_disk()`; the files are stored in `cuvs::neighbors::index::file_directory()`. The HNSW index serialization was provided by @mfoerste4 in rapidsai#1410, which was merged here. It adds the `serialize_to_hnsw()` routine, which combines the dataset, graph, and mapping on the fly while streaming from disk to disk, minimizing the required host memory. The host still needs enough memory to hold the index itself, though.
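
To make the primary/augmented assignment in step 1 concrete, here is a simplified sketch that finds each vector's two nearest centroids with squared-L2 distance (plain single-threaded C++; it ignores the balancing and small-partition merging that the real `ace_get_partition_labels()` performs):

```cpp
#include <cstdint>
#include <limits>
#include <vector>

// For each row of `data` (n x dim, row-major), find the nearest (primary)
// and second-nearest (augmented) centroid among `centroids` (c x dim).
void two_nearest_centroids(const std::vector<float>& data,
                           const std::vector<float>& centroids,
                           int64_t n, int64_t c, int64_t dim,
                           std::vector<int64_t>& primary,
                           std::vector<int64_t>& augmented) {
  primary.assign(n, -1);
  augmented.assign(n, -1);
  for (int64_t i = 0; i < n; ++i) {
    float best = std::numeric_limits<float>::max();
    float second = best;
    for (int64_t j = 0; j < c; ++j) {
      float d = 0.f;  // squared-L2 distance to centroid j
      for (int64_t k = 0; k < dim; ++k) {
        float diff = data[i * dim + k] - centroids[j * dim + k];
        d += diff * diff;
      }
      if (d < best) {
        second = best; augmented[i] = primary[i];
        best = d;      primary[i] = j;
      } else if (d < second) {
        second = d;    augmented[i] = j;
      }
    }
  }
}
```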

## Core Components

- **`ace_build()`**: Main entry point that users should call.
- **`ace_get_partition_labels()`**: Performs balanced k-means clustering to assign each vector to its two closest partitions while handling small-partition merging.
- **`ace_create_forward_and_backward_lists()`**: Creates bidirectional ID mappings between original dataset indices and reordered partition-local indices.
- **`ace_set_index_params()`**: Sets the index parameters based on the partition and augmented dataset to ensure efficient kNN graph building.
- **`ace_gather_partition_dataset()`**: In-memory only: gathers the partition and augmented dataset.
- **`ace_adjust_sub_graph_ids()`**: In-memory only: adjusts IDs in the per-partition search graph and stores them into the main search graph.
- **`ace_adjust_final_graph_ids()`**: In-memory only: maps graph neighbor IDs from the reordered space back to original vector IDs.
- **`ace_reorder_and_store_dataset()`**: Disk only: reorders the dataset based on partitions and stores it to disk, using write buffers to improve performance.
- **`ace_load_partition_dataset_from_disk()`**: Disk only: loads the partition and augmented dataset from disk.
- **`file_descriptor` and `ace_read_large_file()` / `ace_write_large_file()`**: RAII file handle and chunked file I/O operations.
- **CAGRA index changes**: Added an `on_disk_` flag and `file_directory_` to the CAGRA index structure to support disk-backed indices.
- **CAGRA parameter changes**: Added `ace_npartitions` and `ace_build_dir` to the CAGRA parameters so users can request ACE and specify a working directory if required.

## Usage

### C++ API

```cpp
#include <cuvs/neighbors/cagra.hpp>

using namespace cuvs::neighbors;

// Configure index parameters
cagra::index_params params;
params.ace_npartitions = 10;  // Number of partitions (unset or <= 1 to disable ACE)
params.ace_build_dir = "/tmp/ace_build";  // Directory for intermediate files (should be a fast NVMe)
params.graph_degree = 64;
params.intermediate_graph_degree = 128;

// Build ACE index (dataset can be on host memory)
auto dataset = raft::make_host_matrix<float, int64_t>(n_rows, n_cols);
// ... load dataset ...

auto index = cagra::build_ace(res, params, dataset.view(), params.ace_npartitions);

// Search works identically to standard CAGRA if the host has enough memory (index.on_disk() == false)
cagra::search_params search_params;
auto neighbors = raft::make_device_matrix<uint32_t>(res, n_queries, k);
auto distances = raft::make_device_matrix<float>(res, n_queries, k);
cagra::search(res, search_params, index, queries, neighbors.view(), distances.view());
```

### Storage Requirements
1. `cagra_graph.bin`: `n_rows * graph_degree * sizeof(IdxT)`
2. `dataset_mapping.bin`: `n_rows * sizeof(IdxT)`
3. `reordered_dataset.bin`: Size of the input dataset
4. `augmented_dataset.bin`: Size of the input dataset
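
As a hypothetical worked example: with `n_rows = 10^9`, 128-dimensional float32 vectors, `graph_degree = 64`, and a 4-byte `IdxT`, the graph takes 10^9 × 64 × 4 B = 256 GB, the mapping 4 GB, and the reordered and augmented dataset copies 512 GB each, for roughly 1.3 TB of scratch space.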

Authors:
  - Julian Miller (https://github.com/julianmi)
  - Anupam (https://github.com/aamijar)
  - Tarang Jain (https://github.com/tarang-jain)
  - Malte Förster (https://github.com/mfoerste4)
  - Jake Awe (https://github.com/AyodeAwe)
  - Bradley Dice (https://github.com/bdice)
  - Artem M. Chirkin (https://github.com/achirkin)
  - Jinsol Park (https://github.com/jinsolp)

Approvers:
  - MithunR (https://github.com/mythrocks)
  - Robert Maynard (https://github.com/robertmaynard)
  - Tamas Bela Feher (https://github.com/tfeher)
  - Corey J. Nolet (https://github.com/cjnolet)

URL: rapidsai#1404
…i#1538)

This updates RMM memory resource includes to use the header path `<rmm/mr/*>` instead of `<rmm/mr/device/*>`.

xref: rapidsai/rmm#2141
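
As a hypothetical example of the migration pattern (assuming a header keeps its filename under the new directory):

```cpp
// Old include path (pre-change):
// #include <rmm/mr/device/device_memory_resource.hpp>
// New include path, following the <rmm/mr/*> layout described above:
#include <rmm/mr/device_memory_resource.hpp>
```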

Authors:
  - Bradley Dice (https://github.com/bdice)

Approvers:
  - Divye Gala (https://github.com/divyegala)
  - Corey J. Nolet (https://github.com/cjnolet)

URL: rapidsai#1538
Adds new `rocky8-clib-standalone-build` and `rocky8-clib-tests` PR jobs that validate that the C API binaries can be built and that all C tests run correctly.

Also adds a new nightly build job that produces the C API binaries.

Authors:
  - Robert Maynard (https://github.com/robertmaynard)
  - Ben Frederickson (https://github.com/benfred)

Approvers:
  - Jake Awe (https://github.com/AyodeAwe)
  - Bradley Dice (https://github.com/bdice)

URL: rapidsai#1524
@enp1s0 enp1s0 requested review from a team as code owners November 16, 2025 15:40
@enp1s0 enp1s0 requested a review from bdice November 16, 2025 15:40
@review-notebook-app commented

Check out this pull request on ReviewNB to see visual diffs & provide feedback on Jupyter Notebooks.

@cjnolet cjnolet changed the base branch from main to release/25.12 November 17, 2025 18:22
@cjnolet (Member) commented Nov 17, 2025

@enp1s0 this looks like a good capability to have in cuVS. The recall improvements in your benchmarks look good, but this also targets CAGRA-Q, which I think we would want to support at scale. Have you evaluated this approach at scale yet? Do you have a sense of the limits it can reach (in terms of max dataset size on a single GPU and impact on end-to-end GPU build performance)? Another option could be random projections, but that approach also has scaling limits (the optimal number of dimensions increases with the number of vectors in order to preserve quality).
