search: add cross-script name matching using neural embeddings #696
base: master
Conversation
Adds support for matching names across different writing systems (Arabic, Cyrillic, Chinese, etc.) using multilingual embeddings. When a non-Latin query comes in, we convert it to a 384-dim vector and find similar vectors in the index. This lets us match "محمد علي" to "Mohamed Ali" with 97% similarity.

Key changes:
- New internal/embeddings package with ONNX model integration
- Hybrid search: embeddings for non-Latin, Jaro-Winkler for Latin
- LRU cache for embeddings (~5µs after warm-up)
- Build with `-tags embeddings` to enable
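For reviewers unfamiliar with the routing logic, here is a minimal sketch of the hybrid dispatch described above. The names `isNonLatin`, `Encode`, `Nearest`, and `jaroWinklerSearch` are illustrative placeholders, not necessarily the identifiers used in this PR:

```go
// Sketch of the hybrid search path: non-Latin queries go through the
// embedding index, Latin queries keep the existing Jaro-Winkler matching.
// All helper names below are placeholders for the PR's real methods.
func (s *service) search(ctx context.Context, query string) ([]Match, error) {
	if isNonLatin(query) {
		// Encode the query into a 384-dimensional vector.
		vec, err := s.embedder.Encode(ctx, query)
		if err != nil {
			return nil, err
		}
		// Return the entities whose vectors are most similar.
		return s.index.Nearest(vec), nil
	}
	// Latin input: unchanged string-similarity path.
	return s.jaroWinklerSearch(query), nil
}
```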
Summary of Changes

Hello @MorganaFuture, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the search capabilities by introducing neural embeddings to enable cross-script name matching. This allows the system to semantically understand and match names written in different languages and scripts, such as Arabic or Cyrillic, against Latin-script names in sanctions lists. The implementation uses a hybrid approach, leveraging embeddings for non-Latin queries and retaining existing string matching for Latin queries, ensuring both accuracy for diverse inputs and performance efficiency. This feature aims to drastically improve the precision and recall for international name screening.
Code Review
This pull request introduces a significant and well-implemented feature for cross-script name matching using neural embeddings. The changes are comprehensive, covering the core embedding logic, integration with the search service, configuration, extensive testing, and excellent documentation. The use of build tags to make this an optional feature is a great approach. My feedback focuses on a few areas to improve performance and code clarity, particularly around caching and data lookups.
embeddings, err := s.model.encode(batch)
if err != nil {
The BuildIndex function calls s.model.encode directly, which bypasses the service-level caching implemented in EncodeBatch. If the list of names contains duplicates, they will be re-encoded unnecessarily, which is inefficient.
You should use s.EncodeBatch here to take advantage of the caching layer. This will improve performance, especially during index rebuilds where many names might be repeated from the previous index.
Suggested change:
-	embeddings, err := s.model.encode(batch)
+	embeddings, err := s.EncodeBatch(ctx, batch)
 	if err != nil {
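For illustration, a cache-aware batch encode along the lines this comment suggests might look roughly like the sketch below. The `cache` field with `Get`/`Add` methods is an assumption about the LRU layer, not the PR's exact API:

```go
// EncodeBatch serves repeated names from the LRU cache and only sends cache
// misses to the underlying model. Field and method names are illustrative.
func (s *Service) EncodeBatch(ctx context.Context, names []string) ([][]float32, error) {
	out := make([][]float32, len(names))
	var missIdx []int
	var missNames []string
	for i, name := range names {
		if vec, ok := s.cache.Get(name); ok {
			out[i] = vec // cache hit: reuse the previously computed vector
			continue
		}
		missIdx = append(missIdx, i)
		missNames = append(missNames, name)
	}
	if len(missNames) > 0 {
		vecs, err := s.model.encode(missNames)
		if err != nil {
			return nil, err
		}
		for j, vec := range vecs {
			out[missIdx[j]] = vec
			s.cache.Add(missNames[j], vec) // remember for the next rebuild
		}
	}
	return out, nil
}
```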
entityMap := make(map[string]search.Entity[search.Value], len(searchEntities))
for _, e := range searchEntities {
	entityMap[e.SourceID] = e
}
This part of the code builds a map of all entities on every embedding search. For large datasets, this can be a significant performance bottleneck as it involves iterating over all entities and allocating a new map for each search request.
To improve performance, this entityMap should be pre-calculated and cached within the service or index.Lists struct, ideally when indexedLists.Update() is called. This would change the lookup from O(N) to O(1) on average for each search.
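As a rough illustration of that suggestion, the map could be built once per data refresh. The `Lists` fields and `Update` signature below are assumptions based on this comment, not the repository's actual definitions:

```go
// Lists caches a SourceID -> entity map alongside the slice so searches can
// do O(1) lookups instead of rebuilding the map on every request.
type Lists struct {
	entities  []search.Entity[search.Value]
	entityMap map[string]search.Entity[search.Value]
}

func (l *Lists) Update(entities []search.Entity[search.Value]) {
	m := make(map[string]search.Entity[search.Value], len(entities))
	for _, e := range entities {
		m[e.SourceID] = e
	}
	l.entities = entities
	l.entityMap = m // built once here, reused by every subsequent search
}
```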
// Consider non-Latin if >30% of letters are non-Latin
// This threshold catches mixed names like "محمد Ali"
return float64(nonLatinCount)/float64(total) > 0.3
The threshold 0.3 is a magic number. It's better to define it as a named constant at the package level with a comment explaining its purpose and how this value was chosen. This improves readability and makes it easier to adjust in the future.
For example:
// nonLatinCharacterThreshold is the minimum ratio of non-Latin characters
// required to trigger an embedding-based search.
const nonLatinCharacterThreshold = 0.3

Suggested change:
-	return float64(nonLatinCount)/float64(total) > 0.3
+	return float64(nonLatinCount)/float64(total) > nonLatinCharacterThreshold
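Putting the constant and the quoted snippet together, the detection helper might read as follows. The function name `isNonLatin` and the surrounding loop are a sketch of the heuristic under discussion (it assumes `import "unicode"`), not the PR's exact code:

```go
// nonLatinCharacterThreshold is the minimum ratio of non-Latin letters
// required to route a query to the embedding-based search.
const nonLatinCharacterThreshold = 0.3

// isNonLatin reports whether a name contains enough non-Latin letters to be
// handled by the embedding path. Digits, spaces, and punctuation are ignored.
func isNonLatin(name string) bool {
	var nonLatinCount, total int
	for _, r := range name {
		if !unicode.IsLetter(r) {
			continue
		}
		total++
		if !unicode.Is(unicode.Latin, r) {
			nonLatinCount++
		}
	}
	if total == 0 {
		return false
	}
	// Catches mixed names like "محمد Ali", where only part of the name is non-Latin.
	return float64(nonLatinCount)/float64(total) > nonLatinCharacterThreshold
}
```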
### 2. Get the model

We use `paraphrase-multilingual-MiniLM-L12-v2` in ONNX format (~450MB).
There's a discrepancy in the documented model size. This file and docs/search.md state the size is ~450MB, but tools/export_onnx/README.md and the export script itself suggest the ONNX model is closer to 140MB. The original PyTorch model is ~450MB, but the exported ONNX version is smaller. Please update the documentation to reflect the correct size of the file that users need to download/generate to avoid confusion.
Suggested change:
-	We use `paraphrase-multilingual-MiniLM-L12-v2` in ONNX format (~450MB).
+	We use `paraphrase-multilingual-MiniLM-L12-v2` in ONNX format (~140MB).
This feature requires:
1. Building with `-tags embeddings`
2. Downloading the ONNX model (~450MB)
There's a discrepancy in the documented model size. This file and docs/cross-script-matching.md state the size is ~450MB, but tools/export_onnx/README.md and the export script itself suggest the ONNX model is closer to 140MB. The original PyTorch model is ~450MB, but the exported ONNX version is smaller. Please update the documentation to reflect the correct size of the file that users need to download/generate to avoid confusion.
Suggested change:
-	2. Downloading the ONNX model (~450MB)
+	2. Downloading the ONNX model (~140MB)
for i := 0; i < len(sorted)-1; i++ {
	for j := i + 1; j < len(sorted); j++ {
		if sorted[i] > sorted[j] {
			sorted[i], sorted[j] = sorted[j], sorted[i]
		}
	}
}
This hand-rolled sort is O(n^2). While it works for this test case, it's inefficient. It would be more idiomatic and performant to use sort.Slice from the standard library, which provides an O(n log n) sort.
sort.Slice(sorted, func(i, j int) bool {
	return sorted[i] < sorted[j]
})

cd tools/export_onnx
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
Can you verify these steps run correctly? I'm having a lot of trouble getting the dependencies installed.
Can you write up a Dockerfile that has all of these steps included? We'd like to ship an image that "just works".
Added a Dockerfile. Could you please try it?
@MorganaFuture getting back to this. Thanks for the docker image - I'm running into a problem with the model. This is after running the docker image to output the ~500MB model.