Commit 5aa201f

Auto-generated API code (#2936)
1 parent 7e41af8 commit 5aa201f

4 files changed: 162 additions & 21 deletions

4 files changed

+162
-21
lines changed

docs/reference/api-reference.md

Lines changed: 27 additions & 4 deletions
@@ -1015,6 +1015,7 @@ client.index({ index })
 ## client.info [_info]
 Get cluster info.
 Get basic build, version, and cluster information.
+::: In Serverless, this API is retained for backward compatibility only. Some response fields, such as the version number, should be ignored.
 
 [Endpoint documentation](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-info)
 
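The Serverless note above amounts to "only trust fields that remain meaningful there." A minimal sketch (the interface and values below are illustrative stand-ins, not the client's real `InfoResponse` type; a real call would be `await client.info()` against a live cluster):

```typescript
// Illustrative subset of an info-style response; values are made up.
interface InfoLike {
  cluster_name: string
  version: { number: string }
}

// Per the note above, ignore fields such as the version number on Serverless.
function describeCluster (info: InfoLike, serverless: boolean): string {
  return serverless
    ? info.cluster_name // version.number is not meaningful on Serverless
    : `${info.cluster_name} (v${info.version.number})`
}

const sample: InfoLike = { cluster_name: 'my-cluster', version: { number: '9.1.0' } }
console.log(describeCluster(sample, true))  // my-cluster
console.log(describeCluster(sample, false)) // my-cluster (v9.1.0)
```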
@@ -4624,7 +4625,12 @@ To override the default behavior, you can set the `esql.query.allow_partial_resu
 It is valid only for the CSV format.
 - **`drop_null_columns` (Optional, boolean)**: Indicates whether columns that are entirely `null` will be removed from the `columns` and `values` portion of the results.
 If `true`, the response will include an extra section under the name `all_columns` which has the name of all the columns.
-- **`format` (Optional, Enum("csv" \| "json" \| "tsv" \| "txt" \| "yaml" \| "cbor" \| "smile" \| "arrow"))**: A short version of the Accept header, for example `json` or `yaml`.
+- **`format` (Optional, Enum("csv" \| "json" \| "tsv" \| "txt" \| "yaml" \| "cbor" \| "smile" \| "arrow"))**: A short version of the Accept header, e.g. json, yaml.
+
+`csv`, `tsv`, and `txt` formats will return results in a tabular format, excluding other metadata fields from the response.
+
+For async requests, nothing will be returned if the async query doesn't finish within the timeout.
+The query ID and running status are available in the `X-Elasticsearch-Async-Id` and `X-Elasticsearch-Async-Is-Running` HTTP headers of the response, respectively.
 
 ## client.esql.asyncQueryDelete [_esql.async_query_delete]
 Delete an async ES|QL query.
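The async behavior documented above can be sketched with a stand-in header map (a real call would go through `client.esql.asyncQuery(...)` with `{ meta: true }` to expose transport headers; the `?1`/`?0` boolean encoding and the placeholder ID are assumptions for illustration). Note that Node.js lowercases header names, so the documented `X-Elasticsearch-Async-Id` is read as `x-elasticsearch-async-id`:

```typescript
// Hypothetical headers from an async ES|QL query that did not finish
// within the timeout; the body is empty, so the headers carry the state.
const responseHeaders: Record<string, string> = {
  'x-elasticsearch-async-id': 'demo-async-query-id', // placeholder value
  'x-elasticsearch-async-is-running': '?1'           // assumed '?1' while running
}

// Extract the query ID to poll later, and the current running status.
const queryId = responseHeaders['x-elasticsearch-async-id']
const isRunning = responseHeaders['x-elasticsearch-async-is-running'] === '?1'

console.log(queryId, isRunning) // demo-async-query-id true
```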
@@ -4745,6 +4751,8 @@ name and the next level key is the column name.
 object with information about the clusters that participated in the search along with info such as shards
 count.
 - **`format` (Optional, Enum("csv" \| "json" \| "tsv" \| "txt" \| "yaml" \| "cbor" \| "smile" \| "arrow"))**: A short version of the Accept header, e.g. json, yaml.
+
+`csv`, `tsv`, and `txt` formats will return results in a tabular format, excluding other metadata fields from the response.
 - **`delimiter` (Optional, string)**: The character to use between values within a CSV row. Only valid for the CSV format.
 - **`drop_null_columns` (Optional, boolean)**: Should columns that are entirely `null` be removed from the `columns` and `values` portion of the results?
 Defaults to `false`. If `true` then the response will include an extra section under the name `all_columns` which has the name of all columns.
@@ -7612,6 +7620,7 @@ However, if you do not plan to use the inference APIs to use these models or if
 The following integrations are available through the inference API. You can find the available task types next to the integration name:
 * AlibabaCloud AI Search (`completion`, `rerank`, `sparse_embedding`, `text_embedding`)
 * Amazon Bedrock (`completion`, `text_embedding`)
+* Amazon SageMaker (`chat_completion`, `completion`, `rerank`, `sparse_embedding`, `text_embedding`)
 * Anthropic (`completion`)
 * Azure AI Studio (`completion`, `text_embedding`)
 * Azure OpenAI (`completion`, `text_embedding`)
@@ -7692,14 +7701,28 @@ These settings are specific to the task type you specified.
 - **`timeout` (Optional, string \| -1 \| 0)**: Specifies the amount of time to wait for the inference endpoint to be created.
 
 ## client.inference.putAmazonsagemaker [_inference.put_amazonsagemaker]
-Configure a Amazon SageMaker inference endpoint
+Create an Amazon SageMaker inference endpoint.
 
-[Endpoint documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/infer-service-amazon-sagemaker.html)
+Create an inference endpoint to perform an inference task with the `amazon_sagemaker` service.
+
+[Endpoint documentation](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-amazonsagemaker)
 
 ```ts
-client.inference.putAmazonsagemaker()
+client.inference.putAmazonsagemaker({ task_type, amazonsagemaker_inference_id, service, service_settings })
 ```
 
+### Arguments [_arguments_inference.put_amazonsagemaker]
+
+#### Request (object) [_request_inference.put_amazonsagemaker]
+- **`task_type` (Enum("text_embedding" \| "completion" \| "chat_completion" \| "sparse_embedding" \| "rerank"))**: The type of the inference task that the model will perform.
+- **`amazonsagemaker_inference_id` (string)**: The unique identifier of the inference endpoint.
+- **`service` (Enum("amazon_sagemaker"))**: The type of service supported for the specified task type. In this case, `amazon_sagemaker`.
+- **`service_settings` ({ access_key, endpoint_name, api, region, secret_key, target_model, target_container_hostname, inference_component_name, batch_size, dimensions })**: Settings used to install the inference model.
+These settings are specific to the `amazon_sagemaker` service and `service_settings.api` you specified.
+- **`chunking_settings` (Optional, { max_chunk_size, overlap, sentence_overlap, strategy })**: The chunking configuration object.
+- **`task_settings` (Optional, { custom_attributes, enable_explanations, inference_id, session_id, target_variant })**: Settings to configure the inference task.
+These settings are specific to the task type and `service_settings.api` you specified.
+- **`timeout` (Optional, string \| -1 \| 0)**: Specifies the amount of time to wait for the inference endpoint to be created.
 
 ## client.inference.putAnthropic [_inference.put_anthropic]
 Create an Anthropic inference endpoint.
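Putting the documented putAmazonsagemaker arguments together, a request object might look like the sketch below. All identifiers, keys, the region, and the `api` value are placeholders (consult the endpoint documentation for valid `service_settings.api` values); only the field names follow the argument list above:

```typescript
// Hypothetical request shape; credentials and endpoint names are placeholders.
const request = {
  task_type: 'text_embedding',
  amazonsagemaker_inference_id: 'my-sagemaker-embeddings', // hypothetical ID
  service: 'amazon_sagemaker',
  service_settings: {
    access_key: 'AWS_ACCESS_KEY_PLACEHOLDER',
    secret_key: 'AWS_SECRET_KEY_PLACEHOLDER',
    endpoint_name: 'my-sagemaker-endpoint', // hypothetical SageMaker endpoint
    api: 'API_SCHEMA_PLACEHOLDER',          // see the endpoint docs for valid values
    region: 'us-east-1'
  }
}

// With a configured client, this would be sent as:
//   await client.inference.putAmazonsagemaker(request)
console.log(request.service) // amazon_sagemaker
```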

src/api/api/inference.ts

Lines changed: 32 additions & 13 deletions
@@ -139,8 +139,15 @@ export default class Inference {
         'task_type',
         'amazonsagemaker_inference_id'
       ],
-      body: [],
-      query: []
+      body: [
+        'chunking_settings',
+        'service',
+        'service_settings',
+        'task_settings'
+      ],
+      query: [
+        'timeout'
+      ]
     },
     'inference.put_anthropic': {
       path: [
@@ -716,7 +723,7 @@ export default class Inference {
   }
 
   /**
-   * Create an inference endpoint. IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Mistral, Azure OpenAI, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs. The following integrations are available through the inference API. You can find the available task types next to the integration name: * AlibabaCloud AI Search (`completion`, `rerank`, `sparse_embedding`, `text_embedding`) * Amazon Bedrock (`completion`, `text_embedding`) * Anthropic (`completion`) * Azure AI Studio (`completion`, `text_embedding`) * Azure OpenAI (`completion`, `text_embedding`) * Cohere (`completion`, `rerank`, `text_embedding`) * DeepSeek (`completion`, `chat_completion`) * Elasticsearch (`rerank`, `sparse_embedding`, `text_embedding` - this service is for built-in models and models uploaded through Eland) * ELSER (`sparse_embedding`) * Google AI Studio (`completion`, `text_embedding`) * Google Vertex AI (`rerank`, `text_embedding`) * Hugging Face (`chat_completion`, `completion`, `rerank`, `text_embedding`) * Mistral (`chat_completion`, `completion`, `text_embedding`) * OpenAI (`chat_completion`, `completion`, `text_embedding`) * VoyageAI (`text_embedding`, `rerank`) * Watsonx inference integration (`text_embedding`) * JinaAI (`text_embedding`, `rerank`)
+   * Create an inference endpoint. IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Mistral, Azure OpenAI, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs. The following integrations are available through the inference API. You can find the available task types next to the integration name: * AlibabaCloud AI Search (`completion`, `rerank`, `sparse_embedding`, `text_embedding`) * Amazon Bedrock (`completion`, `text_embedding`) * Amazon SageMaker (`chat_completion`, `completion`, `rerank`, `sparse_embedding`, `text_embedding`) * Anthropic (`completion`) * Azure AI Studio (`completion`, `text_embedding`) * Azure OpenAI (`completion`, `text_embedding`) * Cohere (`completion`, `rerank`, `text_embedding`) * DeepSeek (`completion`, `chat_completion`) * Elasticsearch (`rerank`, `sparse_embedding`, `text_embedding` - this service is for built-in models and models uploaded through Eland) * ELSER (`sparse_embedding`) * Google AI Studio (`completion`, `text_embedding`) * Google Vertex AI (`rerank`, `text_embedding`) * Hugging Face (`chat_completion`, `completion`, `rerank`, `text_embedding`) * Mistral (`chat_completion`, `completion`, `text_embedding`) * OpenAI (`chat_completion`, `completion`, `text_embedding`) * VoyageAI (`text_embedding`, `rerank`) * Watsonx inference integration (`text_embedding`) * JinaAI (`text_embedding`, `rerank`)
   * @see {@link https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put | Elasticsearch API documentation}
   */
  async put (this: That, params: T.InferencePutRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.InferencePutResponse>
@@ -887,15 +894,17 @@ export default class Inference {
   }
 
   /**
-   * Configure a Amazon SageMaker inference endpoint
-   * @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/9.1/infer-service-amazon-sagemaker.html | Elasticsearch API documentation}
+   * Create an Amazon SageMaker inference endpoint. Create an inference endpoint to perform an inference task with the `amazon_sagemaker` service.
+   * @see {@link https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-amazonsagemaker | Elasticsearch API documentation}
    */
-  async putAmazonsagemaker (this: That, params?: T.TODO, options?: TransportRequestOptionsWithOutMeta): Promise<T.TODO>
-  async putAmazonsagemaker (this: That, params?: T.TODO, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.TODO, unknown>>
-  async putAmazonsagemaker (this: That, params?: T.TODO, options?: TransportRequestOptions): Promise<T.TODO>
-  async putAmazonsagemaker (this: That, params?: T.TODO, options?: TransportRequestOptions): Promise<any> {
+  async putAmazonsagemaker (this: That, params: T.InferencePutAmazonsagemakerRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.InferencePutAmazonsagemakerResponse>
+  async putAmazonsagemaker (this: That, params: T.InferencePutAmazonsagemakerRequest, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.InferencePutAmazonsagemakerResponse, unknown>>
+  async putAmazonsagemaker (this: That, params: T.InferencePutAmazonsagemakerRequest, options?: TransportRequestOptions): Promise<T.InferencePutAmazonsagemakerResponse>
+  async putAmazonsagemaker (this: That, params: T.InferencePutAmazonsagemakerRequest, options?: TransportRequestOptions): Promise<any> {
     const {
-      path: acceptedPath
+      path: acceptedPath,
+      body: acceptedBody,
+      query: acceptedQuery
     } = this.acceptedParams['inference.put_amazonsagemaker']
 
     const userQuery = params?.querystring
@@ -911,12 +920,22 @@ export default class Inference {
       }
     }
 
-    params = params ?? {}
     for (const key in params) {
-      if (acceptedPath.includes(key)) {
+      if (acceptedBody.includes(key)) {
+        body = body ?? {}
+        // @ts-expect-error
+        body[key] = params[key]
+      } else if (acceptedPath.includes(key)) {
         continue
       } else if (key !== 'body' && key !== 'querystring') {
-        querystring[key] = params[key]
+        if (acceptedQuery.includes(key) || commonQueryParams.includes(key)) {
+          // @ts-expect-error
+          querystring[key] = params[key]
+        } else {
+          body = body ?? {}
+          // @ts-expect-error
+          body[key] = params[key]
+        }
       }
     }
 
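The rewritten loop above routes each request property into the body, the path, or the querystring according to the `acceptedParams` tables introduced earlier in this diff. A standalone sketch of that routing logic (simplified: no `@ts-expect-error` plumbing, and `commonQueryParams` is a stand-in list of shared client options, assumed here rather than copied from the source):

```typescript
// Tables mirroring the 'inference.put_amazonsagemaker' entry in this diff.
const acceptedPath = ['task_type', 'amazonsagemaker_inference_id']
const acceptedBody = ['chunking_settings', 'service', 'service_settings', 'task_settings']
const acceptedQuery = ['timeout']
// Assumed shared query options; the real list lives elsewhere in the client.
const commonQueryParams = ['error_trace', 'filter_path', 'human', 'pretty']

function route (params: Record<string, unknown>) {
  const body: Record<string, unknown> = {}
  const querystring: Record<string, unknown> = {}
  for (const key in params) {
    if (acceptedBody.includes(key)) {
      body[key] = params[key] // declared body parameter
    } else if (acceptedPath.includes(key)) {
      continue // path parts are serialized into the URL separately
    } else if (key !== 'body' && key !== 'querystring') {
      if (acceptedQuery.includes(key) || commonQueryParams.includes(key)) {
        querystring[key] = params[key] // declared or common query parameter
      } else {
        body[key] = params[key] // unknown keys fall back to the body
      }
    }
  }
  return { body, querystring }
}

const { body, querystring } = route({
  task_type: 'text_embedding', // path: dropped from body and querystring
  service: 'amazon_sagemaker', // body
  timeout: '30s'               // querystring
})
console.log(Object.keys(body))        // [ 'service' ]
console.log(Object.keys(querystring)) // [ 'timeout' ]
```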
src/api/api/info.ts

Lines changed: 1 addition & 1 deletion
@@ -35,7 +35,7 @@ const acceptedParams: Record<string, { path: string[], body: string[], query: st
 }
 
 /**
- * Get cluster info. Get basic build, version, and cluster information.
+ * Get cluster info. Get basic build, version, and cluster information. ::: In Serverless, this API is retained for backward compatibility only. Some response fields, such as the version number, should be ignored.
  * @see {@link https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-info | Elasticsearch API documentation}
  */
 export default async function InfoApi (this: That, params?: T.InfoRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.InfoResponse>
