Auto-generated code for 9.0 #2985

Status: Open. Wants to merge 1 commit into base: 9.0.
5 changes: 4 additions & 1 deletion elasticsearch/_async/client/__init__.py
Original file line number Diff line number Diff line change
@@ -637,6 +637,8 @@ async def bulk(
Imagine a <code>_bulk?refresh=wait_for</code> request with three documents in it that happen to be routed to different shards in an index with five shards.
The request will only wait for those three shards to refresh.
The other two shards that make up the index do not participate in the <code>_bulk</code> request at all.</p>
<p>You might want to disable the refresh interval temporarily to improve indexing throughput for large bulk requests.
Refer to the linked documentation for step-by-step instructions using the index settings API.</p>
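The `refresh=wait_for` behaviour described above operates on the NDJSON body that the bulk API accepts. A minimal sketch of assembling such a body (the index name, document IDs, and the commented client call are illustrative only; actually sending the request requires a running cluster):

```python
import json

def build_bulk_body(index, docs):
    """Serialize (doc_id, source) pairs into the NDJSON body the _bulk API expects."""
    lines = []
    for doc_id, source in docs:
        # One action line, then the document source on its own line.
        lines.append(json.dumps({"index": {"_index": index, "_id": doc_id}}))
        lines.append(json.dumps(source))
    return "\n".join(lines) + "\n"

body = build_bulk_body("my-index", [("1", {"title": "a"}), ("2", {"title": "b"})])
# Hypothetical client call; only the shards holding these documents wait for refresh:
#   await client.bulk(operations=body, refresh="wait_for")
```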


`<https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-bulk>`_
@@ -5867,7 +5869,8 @@ async def termvectors(
The information is only retrieved for the shard the requested document resides in.
The term and field statistics are therefore only useful as relative measures whereas the absolute numbers have no meaning in this context.
By default, when requesting term vectors of artificial documents, a shard to get the statistics from is randomly selected.
Use <code>routing</code> only to hit a particular shard.</p>
Use <code>routing</code> only to hit a particular shard.
Refer to the linked documentation for detailed examples of how to use this API.</p>


`<https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-termvectors>`_
7 changes: 4 additions & 3 deletions elasticsearch/_async/client/cluster.py
@@ -51,7 +51,8 @@ async def allocation_explain(
Get explanations for shard allocations in the cluster.
For unassigned shards, it provides an explanation for why the shard is unassigned.
For assigned shards, it provides an explanation for why the shard is remaining on its current node and has not moved or rebalanced to another node.
This API can be very useful when attempting to diagnose why a shard is unassigned or why a shard continues to remain on its current node when you might expect otherwise.</p>
This API can be very useful when attempting to diagnose why a shard is unassigned or why a shard continues to remain on its current node when you might expect otherwise.
Refer to the linked documentation for examples of how to troubleshoot allocation issues using this API.</p>


`<https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-cluster-allocation-explain>`_
@@ -870,9 +871,9 @@ async def put_settings(

:param flat_settings: Return settings in flat format (default: false)
:param master_timeout: Explicit operation timeout for connection to master node
:param persistent:
:param persistent: The settings that persist after the cluster restarts.
:param timeout: Explicit operation timeout
:param transient:
:param transient: The settings that do not persist after the cluster restarts.
"""
__path_parts: t.Dict[str, str] = {}
__path = "/_cluster/settings"
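The persistent/transient distinction documented in the new parameter descriptions can be illustrated with a small body-builder sketch (the setting name is an example; the commented client call is hypothetical and needs a running cluster):

```python
def cluster_settings_body(persistent=None, transient=None):
    """Assemble the request body for the cluster update-settings API.

    `persistent` settings survive a full cluster restart; `transient`
    settings are cleared when the cluster restarts.
    """
    body = {}
    if persistent:
        body["persistent"] = persistent
    if transient:
        body["transient"] = transient
    return body

body = cluster_settings_body(
    persistent={"cluster.routing.allocation.enable": "all"}
)
# Hypothetical client call:
#   await client.cluster.put_settings(persistent=body["persistent"])
```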
10 changes: 6 additions & 4 deletions elasticsearch/_async/client/esql.py
@@ -31,6 +31,8 @@ class EsqlClient(NamespacedClient):
"columnar",
"filter",
"include_ccs_metadata",
"keep_alive",
"keep_on_completion",
"locale",
"params",
"profile",
@@ -145,10 +147,6 @@ async def async_query(
__query["format"] = format
if human is not None:
__query["human"] = human
if keep_alive is not None:
__query["keep_alive"] = keep_alive
if keep_on_completion is not None:
__query["keep_on_completion"] = keep_on_completion
if pretty is not None:
__query["pretty"] = pretty
if not __body:
@@ -160,6 +158,10 @@
__body["filter"] = filter
if include_ccs_metadata is not None:
__body["include_ccs_metadata"] = include_ccs_metadata
if keep_alive is not None:
__body["keep_alive"] = keep_alive
if keep_on_completion is not None:
__body["keep_on_completion"] = keep_on_completion
if locale is not None:
__body["locale"] = locale
if params is not None:
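The change above moves `keep_alive` and `keep_on_completion` from the query string into the JSON request body. A standalone sketch mirroring that parameter placement (the function and argument names are made up for illustration):

```python
def build_async_query_request(query, keep_alive=None, keep_on_completion=None, fmt=None):
    """Split ES|QL async-query arguments into URL query params and JSON body.

    After this change, keep_alive and keep_on_completion travel in the
    body; only presentation options such as the response format stay in
    the query string.
    """
    params, body = {}, {"query": query}
    if fmt is not None:
        params["format"] = fmt
    if keep_alive is not None:
        body["keep_alive"] = keep_alive
    if keep_on_completion is not None:
        body["keep_on_completion"] = keep_on_completion
    return params, body

params, body = build_async_query_request("FROM logs | LIMIT 10", keep_alive="5m")
```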
33 changes: 31 additions & 2 deletions elasticsearch/_async/client/indices.py
@@ -3861,16 +3861,45 @@ async def put_settings(
Changes dynamic index settings in real time.
For data streams, index setting changes are applied to all backing indices by default.</p>
<p>To revert a setting to the default value, use a null value.
The list of per-index settings that can be updated dynamically on live indices can be found in index module documentation.
The list of per-index settings that can be updated dynamically on live indices can be found in index settings documentation.
To preserve existing settings from being updated, set the <code>preserve_existing</code> parameter to <code>true</code>.</p>
<p>For performance optimization during bulk indexing, you can disable the refresh interval.
Refer to <a href="https://www.elastic.co/docs/deploy-manage/production-guidance/optimize-performance/indexing-speed#disable-refresh-interval">disable refresh interval</a> for an example.</p>
<p>There are multiple valid ways to represent index settings in the request body. You can specify only the setting, for example:</p>
<pre><code>{
&quot;number_of_replicas&quot;: 1
}
</code></pre>
<p>Or you can use an <code>index</code> setting object:</p>
<pre><code>{
&quot;index&quot;: {
&quot;number_of_replicas&quot;: 1
}
}
</code></pre>
<p>Or you can use dot notation:</p>
<pre><code>{
&quot;index.number_of_replicas&quot;: 1
}
</code></pre>
<p>Or you can embed any of the aforementioned options in a <code>settings</code> object. For example:</p>
<pre><code>{
&quot;settings&quot;: {
&quot;index&quot;: {
&quot;number_of_replicas&quot;: 1
}
}
}
</code></pre>
<p>NOTE: You can only define new analyzers on closed indices.
To add an analyzer, you must close the index, define the analyzer, and reopen the index.
You cannot close the write index of a data stream.
To update the analyzer for a data stream's write index and future backing indices, update the analyzer in the index template used by the stream.
Then roll over the data stream to apply the new analyzer to the stream's write index and future backing indices.
This affects searches and any new data added to the stream after the rollover.
However, it does not affect the data stream's backing indices or their existing data.
To change the analyzer for existing backing indices, you must create a new data stream and reindex your data into it.</p>
To change the analyzer for existing backing indices, you must create a new data stream and reindex your data into it.
Refer to <a href="https://www.elastic.co/docs/manage-data/data-store/text-analysis/specify-an-analyzer#update-analyzers-on-existing-indices">updating analyzers on existing indices</a> for step-by-step examples.</p>


`<https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-indices-put-settings>`_
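The dot-notation and nested representations documented above are equivalent; a small helper can demonstrate the mapping between them (this is an illustrative sketch, not part of the client):

```python
def expand_dotted(settings):
    """Expand dot-annotated keys ("index.number_of_replicas") into the
    equivalent nested object form accepted by the update-settings API."""
    nested = {}
    for key, value in settings.items():
        node = nested
        parts = key.split(".")
        for part in parts[:-1]:
            # Walk/create intermediate objects for each dotted segment.
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return nested

assert expand_dotted({"index.number_of_replicas": 1}) == {
    "index": {"number_of_replicas": 1}
}
```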
24 changes: 22 additions & 2 deletions elasticsearch/_async/client/inference.py
@@ -374,13 +374,33 @@ async def put(
<p>IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Mistral, Azure OpenAI, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face.
For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models.
However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.</p>
<p>The following integrations are available through the inference API. You can find the available task types next to the integration name:</p>
<ul>
<li>AlibabaCloud AI Search (<code>completion</code>, <code>rerank</code>, <code>sparse_embedding</code>, <code>text_embedding</code>)</li>
<li>Amazon Bedrock (<code>completion</code>, <code>text_embedding</code>)</li>
<li>Anthropic (<code>completion</code>)</li>
<li>Azure AI Studio (<code>completion</code>, <code>text_embedding</code>)</li>
<li>Azure OpenAI (<code>completion</code>, <code>text_embedding</code>)</li>
<li>Cohere (<code>completion</code>, <code>rerank</code>, <code>text_embedding</code>)</li>
<li>Elasticsearch (<code>rerank</code>, <code>sparse_embedding</code>, <code>text_embedding</code> - this service is for built-in models and models uploaded through Eland)</li>
<li>ELSER (<code>sparse_embedding</code>)</li>
<li>Google AI Studio (<code>completion</code>, <code>text_embedding</code>)</li>
<li>Google Vertex AI (<code>rerank</code>, <code>text_embedding</code>)</li>
<li>Hugging Face (<code>text_embedding</code>)</li>
<li>Mistral (<code>text_embedding</code>)</li>
<li>OpenAI (<code>chat_completion</code>, <code>completion</code>, <code>text_embedding</code>)</li>
<li>VoyageAI (<code>text_embedding</code>, <code>rerank</code>)</li>
<li>Watsonx inference integration (<code>text_embedding</code>)</li>
<li>JinaAI (<code>text_embedding</code>, <code>rerank</code>)</li>
</ul>


`<https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-inference-put>`_

:param inference_id: The inference Id
:param inference_config:
:param task_type: The task type
:param task_type: The task type. Refer to the integration list in the API description
for the available task types.
"""
if inference_id in SKIP_IN_PATH:
raise ValueError("Empty value passed for parameter 'inference_id'")
@@ -543,7 +563,7 @@ async def put_amazonbedrock(
.. raw:: html

<p>Create an Amazon Bedrock inference endpoint.</p>
<p>Creates an inference endpoint to perform an inference task with the <code>amazonbedrock</code> service.</p>
<p>Create an inference endpoint to perform an inference task with the <code>amazonbedrock</code> service.</p>
<blockquote>
<p>info
You need to provide the access and secret keys only once, during the inference model creation. The get inference API does not retrieve your access or secret keys. After creating the inference model, you cannot change the associated key pairs. If you want to use a different access and secret key pair, delete the inference model and recreate it with the same name and the updated keys.</p>
3 changes: 2 additions & 1 deletion elasticsearch/_async/client/ml.py
@@ -3549,7 +3549,8 @@ async def put_datafeed(
Datafeeds retrieve data from Elasticsearch for analysis by an anomaly detection job.
You can associate only one datafeed with each anomaly detection job.
The datafeed contains a query that runs at a defined interval (<code>frequency</code>).
If you are concerned about delayed data, you can add a delay (<code>query_delay') at each interval. By default, the datafeed uses the following query: </code>{&quot;match_all&quot;: {&quot;boost&quot;: 1}}`.</p>
If you are concerned about delayed data, you can add a delay (<code>query_delay</code>) at each interval.
By default, the datafeed uses the following query: <code>{&quot;match_all&quot;: {&quot;boost&quot;: 1}}</code>.</p>
<p>When Elasticsearch security features are enabled, your datafeed remembers which roles the user who created it had
at the time of creation and runs the query using those same roles. If you provide secondary authorization headers,
those credentials are used instead.
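The default query and the `query_delay` behaviour described in the corrected docstring can be sketched as a plain dict builder (the job ID, index pattern, and frequency value are illustrative; only the default query comes from the documentation):

```python
def datafeed_config(job_id, indices, frequency="150s", query_delay=None, query=None):
    """Assemble a datafeed body; when no query is given, fall back to the
    documented default of match_all with a boost of 1."""
    body = {
        "job_id": job_id,
        "indices": indices,
        "frequency": frequency,
        "query": query or {"match_all": {"boost": 1}},
    }
    if query_delay is not None:
        # Delay each query interval to tolerate late-arriving data.
        body["query_delay"] = query_delay
    return body

cfg = datafeed_config("my-job", ["metrics-*"], query_delay="60s")
```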
5 changes: 4 additions & 1 deletion elasticsearch/_sync/client/__init__.py
@@ -635,6 +635,8 @@ def bulk(
Imagine a <code>_bulk?refresh=wait_for</code> request with three documents in it that happen to be routed to different shards in an index with five shards.
The request will only wait for those three shards to refresh.
The other two shards that make up the index do not participate in the <code>_bulk</code> request at all.</p>
<p>You might want to disable the refresh interval temporarily to improve indexing throughput for large bulk requests.
Refer to the linked documentation for step-by-step instructions using the index settings API.</p>


`<https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-bulk>`_
@@ -5865,7 +5867,8 @@ def termvectors(
The information is only retrieved for the shard the requested document resides in.
The term and field statistics are therefore only useful as relative measures whereas the absolute numbers have no meaning in this context.
By default, when requesting term vectors of artificial documents, a shard to get the statistics from is randomly selected.
Use <code>routing</code> only to hit a particular shard.</p>
Use <code>routing</code> only to hit a particular shard.
Refer to the linked documentation for detailed examples of how to use this API.</p>


`<https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-termvectors>`_
7 changes: 4 additions & 3 deletions elasticsearch/_sync/client/cluster.py
@@ -51,7 +51,8 @@ def allocation_explain(
Get explanations for shard allocations in the cluster.
For unassigned shards, it provides an explanation for why the shard is unassigned.
For assigned shards, it provides an explanation for why the shard is remaining on its current node and has not moved or rebalanced to another node.
This API can be very useful when attempting to diagnose why a shard is unassigned or why a shard continues to remain on its current node when you might expect otherwise.</p>
This API can be very useful when attempting to diagnose why a shard is unassigned or why a shard continues to remain on its current node when you might expect otherwise.
Refer to the linked documentation for examples of how to troubleshoot allocation issues using this API.</p>


`<https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-cluster-allocation-explain>`_
@@ -870,9 +871,9 @@ def put_settings(

:param flat_settings: Return settings in flat format (default: false)
:param master_timeout: Explicit operation timeout for connection to master node
:param persistent:
:param persistent: The settings that persist after the cluster restarts.
:param timeout: Explicit operation timeout
:param transient:
:param transient: The settings that do not persist after the cluster restarts.
"""
__path_parts: t.Dict[str, str] = {}
__path = "/_cluster/settings"
10 changes: 6 additions & 4 deletions elasticsearch/_sync/client/esql.py
@@ -31,6 +31,8 @@ class EsqlClient(NamespacedClient):
"columnar",
"filter",
"include_ccs_metadata",
"keep_alive",
"keep_on_completion",
"locale",
"params",
"profile",
@@ -145,10 +147,6 @@ def async_query(
__query["format"] = format
if human is not None:
__query["human"] = human
if keep_alive is not None:
__query["keep_alive"] = keep_alive
if keep_on_completion is not None:
__query["keep_on_completion"] = keep_on_completion
if pretty is not None:
__query["pretty"] = pretty
if not __body:
@@ -160,6 +158,10 @@
__body["filter"] = filter
if include_ccs_metadata is not None:
__body["include_ccs_metadata"] = include_ccs_metadata
if keep_alive is not None:
__body["keep_alive"] = keep_alive
if keep_on_completion is not None:
__body["keep_on_completion"] = keep_on_completion
if locale is not None:
__body["locale"] = locale
if params is not None:
33 changes: 31 additions & 2 deletions elasticsearch/_sync/client/indices.py
@@ -3861,16 +3861,45 @@ def put_settings(
Changes dynamic index settings in real time.
For data streams, index setting changes are applied to all backing indices by default.</p>
<p>To revert a setting to the default value, use a null value.
The list of per-index settings that can be updated dynamically on live indices can be found in index module documentation.
The list of per-index settings that can be updated dynamically on live indices can be found in index settings documentation.
To preserve existing settings from being updated, set the <code>preserve_existing</code> parameter to <code>true</code>.</p>
<p>For performance optimization during bulk indexing, you can disable the refresh interval.
Refer to <a href="https://www.elastic.co/docs/deploy-manage/production-guidance/optimize-performance/indexing-speed#disable-refresh-interval">disable refresh interval</a> for an example.</p>
<p>There are multiple valid ways to represent index settings in the request body. You can specify only the setting, for example:</p>
<pre><code>{
&quot;number_of_replicas&quot;: 1
}
</code></pre>
<p>Or you can use an <code>index</code> setting object:</p>
<pre><code>{
&quot;index&quot;: {
&quot;number_of_replicas&quot;: 1
}
}
</code></pre>
<p>Or you can use dot notation:</p>
<pre><code>{
&quot;index.number_of_replicas&quot;: 1
}
</code></pre>
<p>Or you can embed any of the aforementioned options in a <code>settings</code> object. For example:</p>
<pre><code>{
&quot;settings&quot;: {
&quot;index&quot;: {
&quot;number_of_replicas&quot;: 1
}
}
}
</code></pre>
<p>NOTE: You can only define new analyzers on closed indices.
To add an analyzer, you must close the index, define the analyzer, and reopen the index.
You cannot close the write index of a data stream.
To update the analyzer for a data stream's write index and future backing indices, update the analyzer in the index template used by the stream.
Then roll over the data stream to apply the new analyzer to the stream's write index and future backing indices.
This affects searches and any new data added to the stream after the rollover.
However, it does not affect the data stream's backing indices or their existing data.
To change the analyzer for existing backing indices, you must create a new data stream and reindex your data into it.</p>
To change the analyzer for existing backing indices, you must create a new data stream and reindex your data into it.
Refer to <a href="https://www.elastic.co/docs/manage-data/data-store/text-analysis/specify-an-analyzer#update-analyzers-on-existing-indices">updating analyzers on existing indices</a> for step-by-step examples.</p>


`<https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-indices-put-settings>`_