
Conversation

Contributor

@MorganaFuture MorganaFuture commented Jan 8, 2026

Adds support for matching names across different writing systems (Arabic, Cyrillic, Chinese, etc.) using multilingual embeddings.

When a non-Latin query comes in, we convert it to a 384-dim vector and find similar vectors in the index. This lets us match "محمد علي" to "Mohamed Ali" with 97% similarity.

Key changes:

  • New internal/embeddings package with ONNX model integration
  • Hybrid search: embeddings for non-Latin queries, Jaro-Winkler for Latin (see the sketch below)
  • LRU cache for embeddings (~5µs after warm-up)
  • Build with -tags embeddings to enable

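For reviewers, here is a minimal sketch of the hybrid dispatch described above. The function and field names (isNonLatin, embeddingIndex.Nearest, jaroWinklerSearch, Match) are illustrative, not the actual API in this PR:

// Minimal sketch of the hybrid search dispatch (illustrative names only).
func (s *Service) searchByName(ctx context.Context, query string) ([]Match, error) {
	if isNonLatin(query) {
		// Non-Latin query: encode to a 384-dim vector and look up the
		// nearest vectors in the pre-built embedding index.
		vec, err := s.embeddings.Encode(ctx, query)
		if err != nil {
			return nil, err
		}
		return s.embeddingIndex.Nearest(vec), nil
	}
	// Latin query: keep the existing, faster Jaro-Winkler string matching.
	return s.jaroWinklerSearch(query), nil
}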
@gemini-code-assist

Summary of Changes

Hello @MorganaFuture, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the search capabilities by introducing neural embeddings to enable cross-script name matching. This allows the system to semantically understand and match names written in different languages and scripts, such as Arabic or Cyrillic, against Latin-script names in sanctions lists. The implementation uses a hybrid approach, leveraging embeddings for non-Latin queries and retaining existing string matching for Latin queries, ensuring both accuracy for diverse inputs and performance efficiency. This feature aims to drastically improve the precision and recall for international name screening.

Highlights

  • Cross-Script Name Matching: Introduced support for matching names across different writing systems (e.g., Arabic, Cyrillic, Chinese to Latin) using neural embeddings, enabling matches like 'محمد علي' to 'Mohamed Ali' with high similarity.
  • New Embeddings Package: Added a new internal/embeddings package which integrates ONNX models for generating 384-dimensional multilingual vectors, handling model loading, inference, and L2 normalization (a rough sketch of the normalization and similarity math follows this list).
  • Hybrid Search Approach: Implemented a hybrid search strategy where non-Latin queries utilize the new embedding-based search, while Latin queries continue to use the faster Jaro-Winkler algorithm for efficiency.
  • Performance Optimizations: Incorporated an LRU cache for embedding vectors to significantly reduce inference latency for repeated queries (down to ~5µs after warm-up) and optimized batch processing for encoding multiple texts.
  • Configurable and Opt-in Feature: The embeddings feature is opt-in, requiring a -tags embeddings build flag and configuration via environment variables or YAML, allowing users to enable it based on their needs.
  • Comprehensive Documentation and Testing: Provided detailed documentation on setup, configuration, API usage, and limitations, alongside extensive unit, integration, accuracy, and performance benchmarks for the new embeddings functionality.
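
As a reference for reviewers, here is a rough sketch (not the PR's code) of the L2 normalization and similarity step that 384-dimensional embedding comparison relies on; once vectors are normalized, cosine similarity reduces to a dot product:

import "math"

// l2Normalize scales v in place to unit length.
func l2Normalize(v []float32) {
	var sum float64
	for _, x := range v {
		sum += float64(x) * float64(x)
	}
	norm := float32(math.Sqrt(sum))
	if norm == 0 {
		return
	}
	for i := range v {
		v[i] /= norm
	}
}

// cosineSimilarity assumes both vectors are already L2-normalized,
// so the cosine reduces to a plain dot product.
func cosineSimilarity(a, b []float32) float32 {
	var dot float32
	for i := range a {
		dot += a[i] * b[i]
	}
	return dot
}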



@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a significant and well-implemented feature for cross-script name matching using neural embeddings. The changes are comprehensive, covering the core embedding logic, integration with the search service, configuration, extensive testing, and excellent documentation. The use of build tags to make this an optional feature is a great approach. My feedback focuses on a few areas to improve performance and code clarity, particularly around caching and data lookups.

Comment on lines +221 to +222
embeddings, err := s.model.encode(batch)
if err != nil {


high

The BuildIndex function calls s.model.encode directly, which bypasses the service-level caching implemented in EncodeBatch. If the list of names contains duplicates, they will be re-encoded unnecessarily, which is inefficient.

You should use s.EncodeBatch here to take advantage of the caching layer. This will improve performance, especially during index rebuilds where many names might be repeated from the previous index.

Suggested change
embeddings, err := s.model.encode(batch)
if err != nil {
embeddings, err := s.EncodeBatch(ctx, batch)
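
For context, a rough sketch of the cache-aware path EncodeBatch is expected to take, so only cache misses reach the model. The cache field and its Get/Add methods are assumptions for illustration, not necessarily the PR's actual code:

func (s *Service) EncodeBatch(ctx context.Context, texts []string) ([][]float32, error) {
	out := make([][]float32, len(texts))
	var misses []string
	var missIdx []int
	for i, t := range texts {
		if v, ok := s.cache.Get(t); ok { // LRU hit: no inference needed
			out[i] = v
			continue
		}
		misses = append(misses, t)
		missIdx = append(missIdx, i)
	}
	if len(misses) > 0 {
		vecs, err := s.model.encode(misses) // encode only the uncached texts
		if err != nil {
			return nil, err
		}
		for j, v := range vecs {
			out[missIdx[j]] = v
			s.cache.Add(misses[j], v)
		}
	}
	return out, nil
}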

Comment on lines +142 to +145
entityMap := make(map[string]search.Entity[search.Value], len(searchEntities))
for _, e := range searchEntities {
entityMap[e.SourceID] = e
}


high

This part of the code builds a map of all entities on every embedding search. For large datasets, this can be a significant performance bottleneck as it involves iterating over all entities and allocating a new map for each search request.

To improve performance, this entityMap should be pre-calculated and cached within the service or index.Lists struct, ideally when indexedLists.Update() is called. This would change the lookup from O(N) to O(1) on average for each search.
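
As an illustration of what that could look like (the exact struct layout here is an assumption), the map could be built once in Update() and reused by every search:

type indexedLists struct {
	entities  []search.Entity[search.Value]
	entityMap map[string]search.Entity[search.Value] // SourceID -> entity, built once per update
}

func (l *indexedLists) Update(entities []search.Entity[search.Value]) {
	m := make(map[string]search.Entity[search.Value], len(entities))
	for _, e := range entities {
		m[e.SourceID] = e
	}
	l.entities = entities
	l.entityMap = m // each search now does an O(1) lookup instead of rebuilding the map
}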


// Consider non-Latin if >30% of letters are non-Latin
// This threshold catches mixed names like "محمد Ali"
return float64(nonLatinCount)/float64(total) > 0.3


medium

The threshold 0.3 is a magic number. It's better to define it as a named constant at the package level with a comment explaining its purpose and how this value was chosen. This improves readability and makes it easier to adjust in the future.

For example:

// nonLatinCharacterThreshold is the minimum ratio of non-Latin characters
// required to trigger an embedding-based search.
const nonLatinCharacterThreshold = 0.3
Suggested change
return float64(nonLatinCount)/float64(total) > 0.3
return float64(nonLatinCount)/float64(total) > nonLatinCharacterThreshold
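
For reference, a hedged sketch of the kind of ratio-based check the constant would feed into (the PR's actual helper may differ; this only uses the standard library's unicode package):

import "unicode"

// nonLatinCharacterThreshold is the minimum ratio of non-Latin letters
// required to trigger an embedding-based search.
const nonLatinCharacterThreshold = 0.3

func isNonLatin(name string) bool {
	var nonLatinCount, total int
	for _, r := range name {
		if !unicode.IsLetter(r) {
			continue // ignore spaces, digits, and punctuation
		}
		total++
		if !unicode.Is(unicode.Latin, r) {
			nonLatinCount++
		}
	}
	if total == 0 {
		return false
	}
	return float64(nonLatinCount)/float64(total) > nonLatinCharacterThreshold
}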


### 2. Get the model

We use `paraphrase-multilingual-MiniLM-L12-v2` in ONNX format (~450MB).


medium

There's a discrepancy in the documented model size. This file and docs/search.md state the size is ~450MB, but tools/export_onnx/README.md and the export script itself suggest the ONNX model is closer to 140MB. The original PyTorch model is ~450MB, but the exported ONNX version is smaller. Please update the documentation to reflect the correct size of the file that users need to download/generate to avoid confusion.

Suggested change
We use `paraphrase-multilingual-MiniLM-L12-v2` in ONNX format (~450MB).
We use `paraphrase-multilingual-MiniLM-L12-v2` in ONNX format (~140MB).


This feature requires:
1. Building with `-tags embeddings`
2. Downloading the ONNX model (~450MB)


medium

There's a discrepancy in the documented model size. This file and docs/cross-script-matching.md state the size is ~450MB, but tools/export_onnx/README.md and the export script itself suggest the ONNX model is closer to 140MB. The original PyTorch model is ~450MB, but the exported ONNX version is smaller. Please update the documentation to reflect the correct size of the file that users need to download/generate to avoid confusion.

Suggested change
2. Downloading the ONNX model (~450MB)
2. Downloading the ONNX model (~140MB)

Comment on lines +312 to +318
for i := 0; i < len(sorted)-1; i++ {
	for j := i + 1; j < len(sorted); j++ {
		if sorted[i] > sorted[j] {
			sorted[i], sorted[j] = sorted[j], sorted[i]
		}
	}
}


medium

This sorting implementation is a bubble sort. While it works for this test case, it's inefficient (O(n^2)). It would be more idiomatic and performant to use sort.Slice from the standard library, which provides an O(n log n) sort.

sort.Slice(sorted, func(i, j int) bool {
	return sorted[i] < sorted[j]
})

cd tools/export_onnx
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
Member


Can you verify these steps run correctly? I'm having a lot of trouble getting the dependencies installed.

Can you write up a Dockerfile that has all of these steps included? We'd like to ship an image that "just works".

Contributor Author

@MorganaFuture MorganaFuture Jan 13, 2026


Added a Dockerfile. Could you please try it?

@adamdecaf
Member

@MorganaFuture getting back to this. Thanks for the docker image - I'm running into a problem with the model.

ts=2026-01-23T15:52:55Z msg="failed to rebuild embedding index: building embedding index: embeddings: failed to encode batch 0: inference failed: unimplemented ONNX op \"ReduceSum\" in Node \"/1/ReduceSum\" [ReduceSum](/1/Mul_1_output_0, onnx::ReduceSum_1922) -> /1/ReduceSum_output_0 - attrs[keepdims (INT)]" app=watchman level=warn version=

This is after running the docker image to output the ~500MB model.
