
[META] Generic ML inference processor for ingestion pipeline #2414

Closed
ylwu-amzn opened this issue May 7, 2024 · 1 comment
Labels
enhancement New feature or request v2.14.0

Comments

ylwu-amzn (Collaborator)
Today we have specific ML/AI processors for different use cases. For example:

  1. TextEmbeddingProcessor for generating embeddings with text-embedding models
  2. GenerativeQAResponseProcessor for QA with an LLM
  3. PersonalizeRankingProcessor for reranking search results with an Amazon Personalize model

Users have more and more use cases, such as language identification, NER, etc. To avoid the effort of building more and more specific processors, we can build a generic ML inference processor that can enrich data with any ML model.
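As a rough sketch of what such a generic processor could look like in a pipeline definition (registered via `PUT /_ingest/pipeline/<name>`), assuming the processor is exposed as `ml_inference` with `model_id`, `input_map`, and `output_map` parameters; the model ID and field names below are illustrative, not from this issue:

```json
{
  "description": "Enrich documents with an arbitrary deployed ML model",
  "processors": [
    {
      "ml_inference": {
        "model_id": "<your_model_id>",
        "input_map": [
          { "text": "passage_text" }
        ],
        "output_map": [
          { "language": "response" }
        ]
      }
    }
  ]
}
```

In this sketch, `input_map` maps the model's input field (`text`) to a document field (`passage_text`), and `output_map` writes the model output (`response`) into a new document field (`language`), so the same processor can front any model rather than requiring a purpose-built processor per use case.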

ylwu-amzn (Collaborator, Author) commented May 7, 2024

PR merged #2205

Projects
Status: 2.14.0 (Launched)
Status: Released