Tags: NVIDIA-Merlin/models
Ensure TopKEncoder has correct outputs when the model is saved (#1225)
* Remove output_names from BaseModel.
* Add an assertion on the output signature of the saved model to test_topk_encoder.
* Move the compile method from BaseModel to Model.
* Correct the name of the structured outputs.
* Move the compile method back and add a special case for TopKOutput.
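For context, the new assertion inspects the serving signature of the saved top-k model. Below is a minimal sketch of that kind of check, not the exact test code; `retrieval_model`, `candidates`, the save path, and the `k` argument are placeholders, and the output names depend on the TopKOutput configuration:

```python
import tensorflow as tf

# Placeholders: a trained RetrievalModelV2 and a merlin.io.Dataset of candidate item features.
topk_model = retrieval_model.to_top_k_model(candidates, k=10)
topk_model.save("/tmp/topk_encoder")

# Reload the SavedModel and inspect the structured outputs of its serving signature,
# which should expose the top-k scores and item identifiers under the corrected names.
loaded = tf.saved_model.load("/tmp/topk_encoder")
signature = loaded.signatures["serving_default"]
print(signature.structured_outputs)
```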
Make RetrievalModelV2 support an item tower with transforms (e.g. pre-trained embeddings) (#1198)
* Make RetrievalModelV2's to_top_k_model(), candidate_embeddings(), and batch_predict() support a Loader with transforms, for pre-trained embeddings in the item tower.
* Fix a test error and ensure that all batch_predict() calls with the new API support a Loader with transforms (which include pre-trained embeddings).
* Fix the retrieval example, which was using the wrong schema to export query and item embeddings.
* Add missing importorskip on torch and pytorch_lightning for the torch integration tests.
* Skip a test if nvtabular is available.
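As a rough illustration of what this change enables, the sketch below builds a Loader whose transforms inject pre-trained item embeddings and passes it to the retrieval APIs. EmbeddingOperator, its arguments, and the Loader's transforms parameter are assumptions based on the merlin-dataloader pre-trained-embedding operator; `retrieval_model`, `item_dataset`, and `pretrained` are placeholders:

```python
import merlin.models.tf as mm
from merlin.dataloader.ops.embeddings import EmbeddingOperator  # assumed op name/location

# Placeholders: `item_dataset` is a merlin.io.Dataset of item features and
# `pretrained` is a (num_items, dim) numpy array of pre-trained item embeddings.
item_loader = mm.Loader(
    item_dataset,
    batch_size=1024,
    transforms=[
        EmbeddingOperator(
            pretrained,
            lookup_key="item_id",
            embedding_name="pretrained_item_embeddings",
        )
    ],
)

# After #1198, these accept the Loader directly, so its transforms
# (here, the pre-trained embeddings) are applied to the item tower inputs.
item_embeddings = retrieval_model.candidate_embeddings(item_loader)
topk_model = retrieval_model.to_top_k_model(item_loader)
predictions = retrieval_model.batch_predict(item_loader, batch_size=1024)
```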
Add Model class (#1126)
* Add Model class.
* Use BlockContainer in the Model class.
* Add unit tests.
* Add docstrings.
* Add module_utils unit tests.
* Remove future work.
* Move initialize() to module_utils.
* Handle batch in training_step.
* Add output_schema().
* Check that model outputs have no targets when no target is provided.
* Put loss and metrics on the same device.
* Add docstrings to module_utils functions.
* Move metric device setting to initialize.
* Update logic for using model output targets.
Co-authored-by: Marc Romeyn <marcromeyn@gmail.com>
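A hedged sketch of how the new PyTorch Model class composes blocks and trains with PyTorch Lightning, based on the items above; the block names, the "click" target column, and the exact placement of initialize() and output_schema() are assumptions rather than the definitive API:

```python
import pytorch_lightning as pl
import merlin.models.torch as mm
from merlin.dataloader.torch import Loader

# Placeholder: `train` is a merlin.io.Dataset whose schema contains a binary "click" target.
schema = train.schema

model = mm.Model(
    mm.TabularInputBlock(schema, init="defaults"),  # blocks are held in a BlockContainer
    mm.MLPBlock([64, 32]),
    mm.BinaryOutput(schema["click"]),
)

loader = Loader(train, batch_size=1024)
model.initialize(loader)       # builds lazy modules and puts loss/metrics on the right device
print(model.output_schema())   # schema describing the model's outputs

trainer = pl.Trainer(max_epochs=1)
trainer.fit(model, loader)
```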
MM transformer training and serving example (#1045)
Add inference to the next-item prediction (session-based) example.
* Extend the example.
* Align the unit test.
* Add merlin-systems to the envs.
Co-authored-by: Karl Higley <kmhigley@gmail.com>
Co-authored-by: Benedikt Schifferer <bschifferer@nvidia.com>