VinF Hybrid Inference #4: ChromeAdapter in stream methods (rebased)#8949
erikeldridge merged 23 commits into vaihi-exp from
Conversation
We only define system prompts via `onDeviceParams` initialization.
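To illustrate, here is a minimal sketch of fixing a system prompt at initialization time rather than per request. The type shapes are assumptions modeled on the Prompt API's `LanguageModel` options and the `HybridParams`/`onDeviceParams` names from this PR series; the SDK's actual interfaces may differ.

```typescript
// Sketch only: these shapes are assumptions, not the SDK's exact types.
// Modeled on the Prompt API's LanguageModel create options
// (https://github.com/webmachinelearning/prompt-api).

interface LanguageModelPrompt {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

// Hypothetical subset of the on-device creation options.
interface OnDeviceParams {
  createOptions?: {
    initialPrompts?: LanguageModelPrompt[];
  };
}

// Hypothetical subset of HybridParams; the mode strings follow the
// string-typed inference mode introduced in #8941.
interface HybridParams {
  mode: 'prefer_on_device' | 'only_on_device' | 'only_in_cloud';
  onDeviceParams?: OnDeviceParams;
}

// The system prompt is supplied once, when the on-device model is
// created; subsequent generate calls reuse it implicitly.
const params: HybridParams = {
  mode: 'prefer_on_device',
  onDeviceParams: {
    createOptions: {
      initialPrompts: [
        { role: 'system', content: 'You are a helpful assistant.' }
      ]
    }
  }
};
```

This is why a per-request system instruction has no effect on the on-device path: the prompt is baked into the model session at creation.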
Checks: Vertex AI Mock Responses Check · Size Report (1 affected product; test logs) · Size Analysis Report (1 affected product; test logs)
Commits:

- Fix languageCode parameter in action_code_url (#8912)
- Vaihi add langmodel types (#8927) — adds `LanguageModel` types based on https://github.com/webmachinelearning/prompt-api?tab=readme-ov-file#full-api-surface-in-web-idl
- Define HybridParams (#8935) — Co-authored-by: Erik Eldridge
- Adding smoke test for new hybrid params (#8937) — reuses the existing name of the model params input; Co-authored-by: Erik Eldridge
- Moving to in-cloud naming (#8938) — Co-authored-by: Erik Eldridge
- Moving to string type for the inference mode (#8941)
- Define ChromeAdapter class (#8942) — Co-authored-by: Erik Eldridge
- VinF Hybrid Inference: Implement ChromeAdapter (rebased) (#8943)
- Adding count token impl (#8950)
- VinF Hybrid Inference #4: ChromeAdapter in stream methods (rebased) (#8949)
- Define values for Availability enum (#8951)
- VinF Hybrid Inference: narrow Chrome input type (#8953)
- Add image inference support (#8954) — adds image-based input for inference and passes image as an input type when creating the language model object
- Disable count tokens API for on-device inference (#8962)
- VinF Hybrid Inference: throw if only_on_device and model is unavailable (#8965)
- VinF Hybrid Inference: update docs (#8970)
- VinF Hybrid Inference: consolidate onDeviceParams initialization (#8969)
- VinF Hybrid Inference: disable multi-turn support (#8973)
- VinF Hybrid Inference: remove default expected input types (#8974)
- VinF Hybrid Inference: set image (and text) as default input type (#8984)
- VinF Hybrid Inference: log debug messages in conditional logic (#8992)
#8943 enabled the non-streaming methods to use Chrome's on-device model. This PR does the same for the streaming methods (`generateContentStream` and `sendMessageStream`).
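A rough sketch of the routing the streaming path now shares with the non-streaming one: ask the adapter whether the on-device model can serve the request, and fall back to the cloud stream otherwise. The `StreamAdapter` interface, method names, and availability rule below are illustrative assumptions, not the SDK's actual `ChromeAdapter` API.

```typescript
// Sketch only: a hypothetical ChromeAdapter-style dispatch for streaming.
// The real adapter wraps Chrome's built-in on-device LanguageModel.

interface StreamAdapter {
  // Resolves true when the on-device model can serve this request.
  isAvailable(request: string): Promise<boolean>;
  generateContentStream(request: string): AsyncGenerator<string>;
}

// Stand-in for the existing cloud streaming implementation.
async function* cloudStream(request: string): AsyncGenerator<string> {
  yield `cloud:${request}`;
}

// Shared routing: prefer the on-device stream, else stream from the cloud.
async function* generateContentStream(
  adapter: StreamAdapter,
  request: string
): AsyncGenerator<string> {
  if (await adapter.isAvailable(request)) {
    yield* adapter.generateContentStream(request);
  } else {
    yield* cloudStream(request);
  }
}

// Example adapter that (arbitrarily) handles only short prompts on-device.
const adapter: StreamAdapter = {
  async isAvailable(request) {
    return request.length < 10;
  },
  async *generateContentStream(request) {
    yield `device:${request}`;
  }
};

async function main() {
  for await (const chunk of generateContentStream(adapter, 'hi')) {
    console.log(chunk); // device:hi
  }
  for await (const chunk of generateContentStream(adapter, 'a much longer prompt')) {
    console.log(chunk); // cloud:a much longer prompt
  }
}

main();
```

Keeping the availability check inside the shared dispatch means the streaming and non-streaming methods fall back identically, which matches the only_on_device error behavior added later in #8965.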
Replaces #8918, since we're now merging into a non-main branch for prerelease.