Commit e650bb1

Auto-generated API code (#2937)
1 parent 64e4f8c commit e650bb1

File tree: 4 files changed (+141, -13 lines)

docs/reference.asciidoc

Lines changed: 28 additions & 3 deletions

@@ -5106,7 +5106,12 @@ To override the default behavior, you can set the `esql.query.allow_partial_resu
 It is valid only for the CSV format.
 ** *`drop_null_columns` (Optional, boolean)*: Indicates whether columns that are entirely `null` will be removed from the `columns` and `values` portion of the results.
 If `true`, the response will include an extra section under the name `all_columns` which has the name of all the columns.
-** *`format` (Optional, Enum("csv" | "json" | "tsv" | "txt" | "yaml" | "cbor" | "smile" | "arrow"))*: A short version of the Accept header, for example `json` or `yaml`.
+** *`format` (Optional, Enum("csv" | "json" | "tsv" | "txt" | "yaml" | "cbor" | "smile" | "arrow"))*: A short version of the Accept header, e.g. json, yaml.
+
+`csv`, `tsv`, and `txt` formats will return results in a tabular format, excluding other metadata fields from the response.
+
+For async requests, nothing will be returned if the async query doesn't finish within the timeout.
+The query ID and running status are available in the `X-Elasticsearch-Async-Id` and `X-Elasticsearch-Async-Is-Running` HTTP headers of the response, respectively.
 
 [discrete]
 ==== async_query_delete
@@ -5215,6 +5220,8 @@ name and the next level key is the column name.
 object with information about the clusters that participated in the search along with info such as shards
 count.
 ** *`format` (Optional, Enum("csv" | "json" | "tsv" | "txt" | "yaml" | "cbor" | "smile" | "arrow"))*: A short version of the Accept header, e.g. json, yaml.
+
+`csv`, `tsv`, and `txt` formats will return results in a tabular format, excluding other metadata fields from the response.
 ** *`delimiter` (Optional, string)*: The character to use between values within a CSV row. Only valid for the CSV format.
 ** *`drop_null_columns` (Optional, boolean)*: Should columns that are entirely `null` be removed from the `columns` and `values` portion of the results?
 Defaults to `false`. If `true` then the response will include an extra section under the name `all_columns` which has the name of all columns.
@@ -8196,6 +8203,7 @@ However, if you do not plan to use the inference APIs to use these models or if
 The following integrations are available through the inference API. You can find the available task types next to the integration name:
 * AlibabaCloud AI Search (`completion`, `rerank`, `sparse_embedding`, `text_embedding`)
 * Amazon Bedrock (`completion`, `text_embedding`)
+* Amazon SageMaker (`chat_completion`, `completion`, `rerank`, `sparse_embedding`, `text_embedding`)
 * Anthropic (`completion`)
 * Azure AI Studio (`completion`, `text_embedding`)
 * Azure OpenAI (`completion`, `text_embedding`)
@@ -8282,12 +8290,29 @@ These settings are specific to the task type you specified.
 
 [discrete]
 ==== put_amazonsagemaker
-Configure a Amazon SageMaker inference endpoint
+Create an Amazon SageMaker inference endpoint.
+
+Create an inference endpoint to perform an inference task with the `amazon_sagemaker` service.
+
+https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-amazonsagemaker[Endpoint documentation]
 [source,ts]
 ----
-client.inference.putAmazonsagemaker()
+client.inference.putAmazonsagemaker({ task_type, amazonsagemaker_inference_id, service, service_settings })
 ----
 
+[discrete]
+==== Arguments
+
+* *Request (object):*
+** *`task_type` (Enum("text_embedding" | "completion" | "chat_completion" | "sparse_embedding" | "rerank"))*: The type of the inference task that the model will perform.
+** *`amazonsagemaker_inference_id` (string)*: The unique identifier of the inference endpoint.
+** *`service` (Enum("amazon_sagemaker"))*: The type of service supported for the specified task type. In this case, `amazon_sagemaker`.
+** *`service_settings` ({ access_key, endpoint_name, api, region, secret_key, target_model, target_container_hostname, inference_component_name, batch_size, dimensions })*: Settings used to install the inference model.
+These settings are specific to the `amazon_sagemaker` service and `service_settings.api` you specified.
+** *`chunking_settings` (Optional, { max_chunk_size, overlap, sentence_overlap, strategy })*: The chunking configuration object.
+** *`task_settings` (Optional, { custom_attributes, enable_explanations, inference_id, session_id, target_variant })*: Settings to configure the inference task.
+These settings are specific to the task type and `service_settings.api` you specified.
+** *`timeout` (Optional, string | -1 | 0)*: Specifies the amount of time to wait for the inference endpoint to be created.
 
 [discrete]
 ==== put_anthropic
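A minimal sketch of calling the new `put_amazonsagemaker` API, shaped after the Arguments list documented above. The endpoint name, region, and credential values are placeholders, and `client` would be a configured `@elastic/elasticsearch` Client instance:

```typescript
// Hypothetical request object for putAmazonsagemaker, using only the
// required fields from the Arguments list. All values are placeholders.
const request = {
  task_type: 'text_embedding',
  amazonsagemaker_inference_id: 'my-sagemaker-embeddings',
  service: 'amazon_sagemaker',
  service_settings: {
    access_key: 'AWS_ACCESS_KEY',           // placeholder credential
    secret_key: 'AWS_SECRET_KEY',           // placeholder credential
    endpoint_name: 'my-sagemaker-endpoint', // placeholder SageMaker endpoint
    api: 'openai',
    region: 'us-east-1'
  }
}

// With a connected client this would be:
//   const response = await client.inference.putAmazonsagemaker(request)
console.log(request.service) // → amazon_sagemaker
```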

src/api/api/inference.ts

Lines changed: 22 additions & 10 deletions

@@ -262,7 +262,7 @@ export default class Inference {
   }
 
   /**
-    * Create an inference endpoint. IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Mistral, Azure OpenAI, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs. The following integrations are available through the inference API. You can find the available task types next to the integration name: * AlibabaCloud AI Search (`completion`, `rerank`, `sparse_embedding`, `text_embedding`) * Amazon Bedrock (`completion`, `text_embedding`) * Anthropic (`completion`) * Azure AI Studio (`completion`, `text_embedding`) * Azure OpenAI (`completion`, `text_embedding`) * Cohere (`completion`, `rerank`, `text_embedding`) * DeepSeek (`completion`, `chat_completion`) * Elasticsearch (`rerank`, `sparse_embedding`, `text_embedding` - this service is for built-in models and models uploaded through Eland) * ELSER (`sparse_embedding`) * Google AI Studio (`completion`, `text_embedding`) * Google Vertex AI (`rerank`, `text_embedding`) * Hugging Face (`chat_completion`, `completion`, `rerank`, `text_embedding`) * Mistral (`chat_completion`, `completion`, `text_embedding`) * OpenAI (`chat_completion`, `completion`, `text_embedding`) * VoyageAI (`text_embedding`, `rerank`) * Watsonx inference integration (`text_embedding`) * JinaAI (`text_embedding`, `rerank`)
+    * Create an inference endpoint. IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Mistral, Azure OpenAI, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs. The following integrations are available through the inference API. You can find the available task types next to the integration name: * AlibabaCloud AI Search (`completion`, `rerank`, `sparse_embedding`, `text_embedding`) * Amazon Bedrock (`completion`, `text_embedding`) * Amazon SageMaker (`chat_completion`, `completion`, `rerank`, `sparse_embedding`, `text_embedding`) * Anthropic (`completion`) * Azure AI Studio (`completion`, `text_embedding`) * Azure OpenAI (`completion`, `text_embedding`) * Cohere (`completion`, `rerank`, `text_embedding`) * DeepSeek (`completion`, `chat_completion`) * Elasticsearch (`rerank`, `sparse_embedding`, `text_embedding` - this service is for built-in models and models uploaded through Eland) * ELSER (`sparse_embedding`) * Google AI Studio (`completion`, `text_embedding`) * Google Vertex AI (`rerank`, `text_embedding`) * Hugging Face (`chat_completion`, `completion`, `rerank`, `text_embedding`) * Mistral (`chat_completion`, `completion`, `text_embedding`) * OpenAI (`chat_completion`, `completion`, `text_embedding`) * VoyageAI (`text_embedding`, `rerank`) * Watsonx inference integration (`text_embedding`) * JinaAI (`text_embedding`, `rerank`)
     * @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.19/put-inference-api.html | Elasticsearch API documentation}
     */
   async put (this: That, params: T.InferencePutRequest | TB.InferencePutRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.InferencePutResponse>
@@ -397,22 +397,34 @@ export default class Inference {
   }
 
   /**
-    * Configure a Amazon SageMaker inference endpoint
-    * @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.19/infer-service-amazon-sagemaker.html | Elasticsearch API documentation}
+    * Create an Amazon SageMaker inference endpoint. Create an inference endpoint to perform an inference task with the `amazon_sagemaker` service.
+    * @see {@link https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-amazonsagemaker | Elasticsearch API documentation}
     */
-  async putAmazonsagemaker (this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithOutMeta): Promise<T.TODO>
-  async putAmazonsagemaker (this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.TODO, unknown>>
-  async putAmazonsagemaker (this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptions): Promise<T.TODO>
-  async putAmazonsagemaker (this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptions): Promise<any> {
+  async putAmazonsagemaker (this: That, params: T.InferencePutAmazonsagemakerRequest | TB.InferencePutAmazonsagemakerRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.InferencePutAmazonsagemakerResponse>
+  async putAmazonsagemaker (this: That, params: T.InferencePutAmazonsagemakerRequest | TB.InferencePutAmazonsagemakerRequest, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.InferencePutAmazonsagemakerResponse, unknown>>
+  async putAmazonsagemaker (this: That, params: T.InferencePutAmazonsagemakerRequest | TB.InferencePutAmazonsagemakerRequest, options?: TransportRequestOptions): Promise<T.InferencePutAmazonsagemakerResponse>
+  async putAmazonsagemaker (this: That, params: T.InferencePutAmazonsagemakerRequest | TB.InferencePutAmazonsagemakerRequest, options?: TransportRequestOptions): Promise<any> {
     const acceptedPath: string[] = ['task_type', 'amazonsagemaker_inference_id']
+    const acceptedBody: string[] = ['chunking_settings', 'service', 'service_settings', 'task_settings']
     const querystring: Record<string, any> = {}
-    const body = undefined
+    // @ts-expect-error
+    const userBody: any = params?.body
+    let body: Record<string, any> | string
+    if (typeof userBody === 'string') {
+      body = userBody
+    } else {
+      body = userBody != null ? { ...userBody } : undefined
+    }
 
-    params = params ?? {}
     for (const key in params) {
-      if (acceptedPath.includes(key)) {
+      if (acceptedBody.includes(key)) {
+        body = body ?? {}
+        // @ts-expect-error
+        body[key] = params[key]
+      } else if (acceptedPath.includes(key)) {
         continue
       } else if (key !== 'body') {
+        // @ts-expect-error
        querystring[key] = params[key]
       }
     }
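The parameter routing that this diff adds to `putAmazonsagemaker` can be sketched as a standalone function: keys listed in `acceptedBody` go into the request body, keys in `acceptedPath` are consumed by the URL template, and everything else becomes a query-string parameter. The function name is illustrative, not part of the client, and the real method goes on to pass the result to the transport layer (outside this hunk):

```typescript
// Sketch of the generated client's parameter-splitting logic.
function splitParams (
  params: Record<string, any>,
  acceptedPath: string[],
  acceptedBody: string[]
): { body?: Record<string, any>, querystring: Record<string, any> } {
  const querystring: Record<string, any> = {}
  // A nested (deprecated) `body` key seeds the body, as in the diff.
  let body: Record<string, any> | undefined =
    params.body != null ? { ...params.body } : undefined

  for (const key in params) {
    if (acceptedBody.includes(key)) {
      body = body ?? {}
      body[key] = params[key]
    } else if (acceptedPath.includes(key)) {
      continue // path parameters are handled by the URL template
    } else if (key !== 'body') {
      querystring[key] = params[key]
    }
  }
  return { body, querystring }
}

const { body, querystring } = splitParams(
  { task_type: 'rerank', amazonsagemaker_inference_id: 'my-id', service: 'amazon_sagemaker', timeout: '30s' },
  ['task_type', 'amazonsagemaker_inference_id'],
  ['chunking_settings', 'service', 'service_settings', 'task_settings']
)
console.log(body, querystring)
// → { service: 'amazon_sagemaker' } { timeout: '30s' }
```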

src/api/types.ts

Lines changed: 44 additions & 0 deletions

@@ -13353,6 +13353,31 @@ export interface InferenceAmazonBedrockTaskSettings {
 
 export type InferenceAmazonBedrockTaskType = 'completion' | 'text_embedding'
 
+export type InferenceAmazonSageMakerApi = 'openai' | 'elastic'
+
+export interface InferenceAmazonSageMakerServiceSettings {
+  access_key: string
+  endpoint_name: string
+  api: InferenceAmazonSageMakerApi
+  region: string
+  secret_key: string
+  target_model?: string
+  target_container_hostname?: string
+  inference_component_name?: string
+  batch_size?: integer
+  dimensions?: integer
+}
+
+export type InferenceAmazonSageMakerServiceType = 'amazon_sagemaker'
+
+export interface InferenceAmazonSageMakerTaskSettings {
+  custom_attributes?: string
+  enable_explanations?: string
+  inference_id?: string
+  session_id?: string
+  target_variant?: string
+}
+
 export interface InferenceAnthropicServiceSettings {
   api_key: string
   model_id: string
@@ -13610,6 +13635,11 @@ export interface InferenceInferenceEndpointInfoAmazonBedrock extends InferenceIn
   task_type: InferenceTaskTypeAmazonBedrock
 }
 
+export interface InferenceInferenceEndpointInfoAmazonSageMaker extends InferenceInferenceEndpoint {
+  inference_id: string
+  task_type: InferenceTaskTypeAmazonSageMaker
+}
+
 export interface InferenceInferenceEndpointInfoAnthropic extends InferenceInferenceEndpoint {
   inference_id: string
   task_type: InferenceTaskTypeAnthropic
@@ -13802,6 +13832,8 @@ export type InferenceTaskTypeAlibabaCloudAI = 'text_embedding' | 'rerank' | 'com
 
 export type InferenceTaskTypeAmazonBedrock = 'text_embedding' | 'completion'
 
+export type InferenceTaskTypeAmazonSageMaker = 'text_embedding' | 'completion' | 'chat_completion' | 'sparse_embedding' | 'rerank'
+
 export type InferenceTaskTypeAnthropic = 'completion'
 
 export type InferenceTaskTypeAzureAIStudio = 'text_embedding' | 'completion'
@@ -13970,6 +14002,18 @@ export interface InferencePutAmazonbedrockRequest extends RequestBase {
 
 export type InferencePutAmazonbedrockResponse = InferenceInferenceEndpointInfoAmazonBedrock
 
+export interface InferencePutAmazonsagemakerRequest extends RequestBase {
+  task_type: InferenceTaskTypeAmazonSageMaker
+  amazonsagemaker_inference_id: Id
+  timeout?: Duration
+  chunking_settings?: InferenceInferenceChunkingSettings
+  service: InferenceAmazonSageMakerServiceType
+  service_settings: InferenceAmazonSageMakerServiceSettings
+  task_settings?: InferenceAmazonSageMakerTaskSettings
+}
+
+export type InferencePutAmazonsagemakerResponse = InferenceInferenceEndpointInfoAmazonSageMaker
+
 export interface InferencePutAnthropicRequest extends RequestBase {
   task_type: InferenceAnthropicTaskType
   anthropic_inference_id: Id
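The new service-settings shape can be exercised with a minimal conforming object. The types below are copied from the diff (with the generated code's `integer` alias reduced to `number`); all values are placeholders:

```typescript
// Types copied from the diff above; `integer` is a number alias in the client.
type integer = number
type InferenceAmazonSageMakerApi = 'openai' | 'elastic'

interface InferenceAmazonSageMakerServiceSettings {
  access_key: string
  endpoint_name: string
  api: InferenceAmazonSageMakerApi
  region: string
  secret_key: string
  target_model?: string
  target_container_hostname?: string
  inference_component_name?: string
  batch_size?: integer
  dimensions?: integer
}

// Only the five required fields; every optional field is omitted.
// All values are placeholders, not working credentials.
const settings: InferenceAmazonSageMakerServiceSettings = {
  access_key: 'AKIA-PLACEHOLDER',
  secret_key: 'SECRET-PLACEHOLDER',
  endpoint_name: 'my-sagemaker-endpoint',
  api: 'elastic',
  region: 'us-east-1'
}
```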

src/api/typesWithBodyKey.ts

Lines changed: 47 additions & 0 deletions

@@ -13595,6 +13595,31 @@ export interface InferenceAmazonBedrockTaskSettings {
 
 export type InferenceAmazonBedrockTaskType = 'completion' | 'text_embedding'
 
+export type InferenceAmazonSageMakerApi = 'openai' | 'elastic'
+
+export interface InferenceAmazonSageMakerServiceSettings {
+  access_key: string
+  endpoint_name: string
+  api: InferenceAmazonSageMakerApi
+  region: string
+  secret_key: string
+  target_model?: string
+  target_container_hostname?: string
+  inference_component_name?: string
+  batch_size?: integer
+  dimensions?: integer
+}
+
+export type InferenceAmazonSageMakerServiceType = 'amazon_sagemaker'
+
+export interface InferenceAmazonSageMakerTaskSettings {
+  custom_attributes?: string
+  enable_explanations?: string
+  inference_id?: string
+  session_id?: string
+  target_variant?: string
+}
+
 export interface InferenceAnthropicServiceSettings {
   api_key: string
   model_id: string
@@ -13852,6 +13877,11 @@ export interface InferenceInferenceEndpointInfoAmazonBedrock extends InferenceIn
   task_type: InferenceTaskTypeAmazonBedrock
 }
 
+export interface InferenceInferenceEndpointInfoAmazonSageMaker extends InferenceInferenceEndpoint {
+  inference_id: string
+  task_type: InferenceTaskTypeAmazonSageMaker
+}
+
 export interface InferenceInferenceEndpointInfoAnthropic extends InferenceInferenceEndpoint {
   inference_id: string
   task_type: InferenceTaskTypeAnthropic
@@ -14044,6 +14074,8 @@ export type InferenceTaskTypeAlibabaCloudAI = 'text_embedding' | 'rerank' | 'com
 
 export type InferenceTaskTypeAmazonBedrock = 'text_embedding' | 'completion'
 
+export type InferenceTaskTypeAmazonSageMaker = 'text_embedding' | 'completion' | 'chat_completion' | 'sparse_embedding' | 'rerank'
+
 export type InferenceTaskTypeAnthropic = 'completion'
 
 export type InferenceTaskTypeAzureAIStudio = 'text_embedding' | 'completion'
@@ -14226,6 +14258,21 @@ export interface InferencePutAmazonbedrockRequest extends RequestBase {
 
 export type InferencePutAmazonbedrockResponse = InferenceInferenceEndpointInfoAmazonBedrock
 
+export interface InferencePutAmazonsagemakerRequest extends RequestBase {
+  task_type: InferenceTaskTypeAmazonSageMaker
+  amazonsagemaker_inference_id: Id
+  timeout?: Duration
+  /** @deprecated The use of the 'body' key has been deprecated, move the nested keys to the top level object. */
+  body?: {
+    chunking_settings?: InferenceInferenceChunkingSettings
+    service: InferenceAmazonSageMakerServiceType
+    service_settings: InferenceAmazonSageMakerServiceSettings
+    task_settings?: InferenceAmazonSageMakerTaskSettings
+  }
+}
+
+export type InferencePutAmazonsagemakerResponse = InferenceInferenceEndpointInfoAmazonSageMaker
+
 export interface InferencePutAnthropicRequest extends RequestBase {
   task_type: InferenceAnthropicTaskType
   anthropic_inference_id: Id
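The `typesWithBodyKey.ts` request above keeps a deprecated nested `body` key, while `types.ts` uses the flat shape. A sketch of how the two relate, under the assumption that the client lifts nested body keys to the top level (the `flatten` helper and all values here are illustrative, not part of the client):

```typescript
// Placeholder settings object shared by both request shapes.
const serviceSettings = {
  access_key: 'key', secret_key: 'secret',  // placeholders
  endpoint_name: 'my-endpoint', api: 'elastic', region: 'us-east-1'
}

// Flat shape (types.ts): body fields sit at the top level.
const flat = {
  task_type: 'rerank',
  amazonsagemaker_inference_id: 'my-id',
  service: 'amazon_sagemaker',
  service_settings: serviceSettings
}

// Deprecated nested shape (typesWithBodyKey.ts): body fields under `body`.
const nested = {
  task_type: 'rerank',
  amazonsagemaker_inference_id: 'my-id',
  body: { service: 'amazon_sagemaker', service_settings: serviceSettings }
}

// Illustrative flattening: lift nested body keys to the top level.
function flatten ({ body, ...rest }: Record<string, any>): Record<string, any> {
  return { ...rest, ...body }
}
```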
