[inference] propagate connector config to underlying adapter #207202
Conversation
dbea356 force-pushed to 7c96174
 * configuration (without secrets) of the connector.
 * the list of properties depends on the connector type (and subtype for inference)
 */
config: Record<string, string>;
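For context, a minimal sketch of how this field could sit on the internal `InferenceConnector` structure that gets passed down to adapters; only the `config` field and its doc comment come from the diff above, the other field names are assumptions for illustration:

```ts
// Sketch only: `config` matches the diff above; the other fields are
// assumed for illustration and may not match the actual Kibana type.
interface InferenceConnector {
  /** id of the underlying stack connector (assumed field) */
  connectorId: string;
  /** stack connector type id, e.g. '.inference' (assumed value) */
  type: string;
  /**
   * configuration (without secrets) of the connector.
   * the list of properties depends on the connector type (and subtype for inference)
   */
  config: Record<string, string>;
}
```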
Is it possible to create a stricter type for the config property? By this I mean a type definition for each connector config, and then use the union of those types instead of Record<string, string>.
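For illustration, a hypothetical sketch of that suggestion; the per-connector config shapes below are invented and are not the real stack connector schemas:

```ts
// Invented config shapes, purely to illustrate the union idea.
interface OpenAIConnectorConfig {
  apiProvider: string;
  apiUrl: string;
}

interface InferenceEndpointConnectorConfig {
  provider: string;
  taskType: string;
}

// A union used in place of Record<string, string>:
type ConnectorConfig = OpenAIConnectorConfig | InferenceEndpointConnectorConfig;

// config: ConnectorConfig;
```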
Would be a lot of work for very little upside, to be honest:
- the connector configs aren't directly typed; they're inferred from their schemas, and those types aren't directly exposed from the stack_connectors plugin, so there's no real way to retrieve them properly atm.
- more importantly, the inference connector type can support any kind of sub-type, each with its own list of allowed properties, and all of this is typed as a plain record even in the stack_connectors plugin, so we would be trying to strongly type something that is mostly a generic type anyway (see the sketch below).
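In practice this means adapters read the specific keys they need from the generic record defensively at runtime; a minimal sketch, assuming a hypothetical `provider` key:

```ts
// Sketch only: 'provider' is an assumed key, not a documented config property.
function getProvider(config: Record<string, string>): string | undefined {
  return config.provider;
}
```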
Thanks for the explanation. Makes sense.
💛 Build succeeded, but was flaky
LGTM, sorry for the delay
Starting backport for target branches: 8.x
[inference] propagate connector config to underlying adapter (#207202) (cherry picked from commit a69236d)
💚 All backports created successfully
Note: Successful backport PRs will be merged automatically after passing CI. Questions? Please refer to the Backport tool documentation
[inference] propagate connector config to underlying adapter (#207202) (#208048)

# Backport

This will backport the following commits from `main` to `8.x`:
- [[inference] propagate connector config to underlying adapter (#207202)](#207202)

### Questions?
Please refer to the [Backport tool documentation](https://github.com/sqren/backport)

Co-authored-by: Pierre Gayvallet <pierre.gayvallet@elastic.co>
Summary

Related to #206710

Attach the `configuration` from the stack connector to the internal `InferenceConnector` structure we're passing down to inference adapters.

This will allow the adapters to retrieve information from the stack connector, which will be useful to introduce more provider-specific logic (for example, automatically detecting whether the underlying provider supports native function calling).

This is also a requirement for the langchain bridge, as the chatModel will need to know which type of provider is used under the hood.
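As a rough sketch of the kind of provider-specific logic this enables (the `provider` key and the provider names below are assumptions for illustration, not the actual config contract):

```ts
// Hedged sketch: decide whether to use native function calling based on the
// connector config that is now propagated down to the adapter.
const NATIVE_FUNCTION_CALLING_PROVIDERS = new Set(['openai', 'anthropic']);

function supportsNativeFunctionCalling(connector: {
  config: Record<string, string>;
}): boolean {
  const provider = connector.config.provider; // assumed key
  return provider !== undefined && NATIVE_FUNCTION_CALLING_PROVIDERS.has(provider);
}
```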