
Commit 4756247

release: 1.107.2 (#2624)
* chore(api): Minor docs and type updates for realtime
* codegen metadata
* chore(tests): simplify `get_platform` test

  `nest_asyncio` is archived and broken on some platforms so it's not worth keeping in our test suite.

* release: 1.107.2

Co-authored-by: stainless-app[bot] <142633134+stainless-app[bot]@users.noreply.github.com>
1 parent 847ff0b commit 4756247

21 files changed: +344 -187 lines changed

.release-please-manifest.json

Lines changed: 1 addition & 1 deletion
@@ -1,3 +1,3 @@
 {
-  ".": "1.107.1"
+  ".": "1.107.2"
 }

.stats.yml

Lines changed: 2 additions & 2 deletions
@@ -1,4 +1,4 @@
 configured_endpoints: 118
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-16cb18bed32bae8c5840fb39a1bf664026cc40463ad0c487dcb0df1bd3d72db0.yml
-openapi_spec_hash: 4cb51b22f98dee1a90bc7add82d1d132
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-94b1e3cb0bdc616ff0c2f267c33dadd95f133b1f64e647aab6c64afb292b2793.yml
+openapi_spec_hash: 2395319ac9befd59b6536ae7f9564a05
 config_hash: 930dac3aa861344867e4ac84f037b5df

CHANGELOG.md

Lines changed: 9 additions & 0 deletions
@@ -1,5 +1,14 @@
 # Changelog
 
+## 1.107.2 (2025-09-12)
+
+Full Changelog: [v1.107.1...v1.107.2](https://github.com/openai/openai-python/compare/v1.107.1...v1.107.2)
+
+### Chores
+
+* **api:** Minor docs and type updates for realtime ([ab6a10d](https://github.com/openai/openai-python/commit/ab6a10da4ed7e6386695b6f5f29149d4870f85c9))
+* **tests:** simplify `get_platform` test ([01f03e0](https://github.com/openai/openai-python/commit/01f03e0ad1f9ab3f2ed8b7c13d652263c6d06378))
+
 ## 1.107.1 (2025-09-10)
 
 Full Changelog: [v1.107.0...v1.107.1](https://github.com/openai/openai-python/compare/v1.107.0...v1.107.1)

pyproject.toml

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 [project]
 name = "openai"
-version = "1.107.1"
+version = "1.107.2"
 description = "The official Python library for the openai API"
 dynamic = ["readme"]
 license = "Apache-2.0"

requirements-dev.lock

Lines changed: 1 addition & 2 deletions
@@ -70,7 +70,7 @@ filelock==3.12.4
 frozenlist==1.7.0
     # via aiohttp
     # via aiosignal
-griffe==1.14.0
+griffe==1.13.0
 h11==0.16.0
     # via httpcore
 httpcore==1.0.9
@@ -108,7 +108,6 @@ multidict==6.5.0
 mypy==1.14.1
 mypy-extensions==1.0.0
     # via mypy
-nest-asyncio==1.6.0
 nodeenv==1.8.0
     # via pyright
 nox==2023.4.22

src/openai/_version.py

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
 # File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
 
 __title__ = "openai"
-__version__ = "1.107.1"  # x-release-please-version
+__version__ = "1.107.2"  # x-release-please-version

src/openai/resources/responses/responses.py

Lines changed: 24 additions & 24 deletions
@@ -288,10 +288,10 @@ def create(
 
           truncation: The truncation strategy to use for the model response.
 
-              - `auto`: If the context of this response and previous ones exceeds the model's
-                context window size, the model will truncate the response to fit the context
-                window by dropping input items in the middle of the conversation.
-              - `disabled` (default): If a model response will exceed the context window size
+              - `auto`: If the input to this Response exceeds the model's context window size,
+                the model will truncate the response to fit the context window by dropping
+                items from the beginning of the conversation.
+              - `disabled` (default): If the input size will exceed the context window size
                 for a model, the request will fail with a 400 error.
 
           user: This field is being replaced by `safety_identifier` and `prompt_cache_key`. Use

The identical four-line docstring update is repeated for the remaining `create` overloads, in hunks at lines 527 and 766 (`def create`) and at lines 1719, 1958, and 2197 (`async def create`).
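For context, a minimal sketch of what the updated `truncation` semantics mean for a caller; the model name and prompt are illustrative, and an `OPENAI_API_KEY` is assumed to be set in the environment:

```python
# Sketch only: with truncation="auto", an over-long input is trimmed by
# dropping items from the beginning of the conversation; with the default
# "disabled", the same request would fail with a 400 error.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-4.1",    # illustrative model choice
    input="Summarize everything we have discussed so far.",
    truncation="auto",  # drop earliest conversation items instead of erroring
)
print(response.output_text)
```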

src/openai/types/realtime/input_audio_buffer_timeout_triggered.py

Lines changed: 8 additions & 2 deletions
@@ -9,10 +9,16 @@
 
 class InputAudioBufferTimeoutTriggered(BaseModel):
     audio_end_ms: int
-    """Millisecond offset where speech ended within the buffered audio."""
+    """
+    Millisecond offset of audio written to the input audio buffer at the time the
+    timeout was triggered.
+    """
 
     audio_start_ms: int
-    """Millisecond offset where speech started within the buffered audio."""
+    """
+    Millisecond offset of audio written to the input audio buffer that was after the
+    playback time of the last model response.
+    """
 
     event_id: str
     """The unique ID of the server event."""

src/openai/types/realtime/realtime_audio_config_input.py

Lines changed: 5 additions & 2 deletions
@@ -49,8 +49,11 @@ class RealtimeAudioConfigInput(BaseModel):
     """Configuration for turn detection, ether Server VAD or Semantic VAD.
 
     This can be set to `null` to turn off, in which case the client must manually
-    trigger model response. Server VAD means that the model will detect the start
-    and end of speech based on audio volume and respond at the end of user speech.
+    trigger model response.
+
+    Server VAD means that the model will detect the start and end of speech based on
+    audio volume and respond at the end of user speech.
+
     Semantic VAD is more advanced and uses a turn detection model (in conjunction
     with VAD) to semantically estimate whether the user has finished speaking, then
     dynamically sets a timeout based on this probability. For example, if user audio
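To make the Server VAD / Semantic VAD distinction concrete, here is a sketch of the two turn-detection configurations as plain dicts; the field names beyond `type` are assumptions based on this docstring, not taken from this diff:

```python
# Assumed shapes for the two turn-detection modes described above.
server_vad = {
    "type": "server_vad",    # respond when audio volume indicates end of speech
}

semantic_vad = {
    "type": "semantic_vad",  # model-based estimate of whether the user is done
    "eagerness": "auto",     # assumption: how aggressively to end the turn
}

# Setting turn_detection to None disables both, so the client must trigger
# model responses manually.
```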

src/openai/types/realtime/realtime_audio_config_input_param.py

Lines changed: 7 additions & 3 deletions
@@ -2,6 +2,7 @@
 
 from __future__ import annotations
 
+from typing import Optional
 from typing_extensions import TypedDict
 
 from .noise_reduction_type import NoiseReductionType
@@ -46,12 +47,15 @@ class RealtimeAudioConfigInputParam(TypedDict, total=False):
     transcription, these offer additional guidance to the transcription service.
     """
 
-    turn_detection: RealtimeAudioInputTurnDetectionParam
+    turn_detection: Optional[RealtimeAudioInputTurnDetectionParam]
     """Configuration for turn detection, ether Server VAD or Semantic VAD.
 
     This can be set to `null` to turn off, in which case the client must manually
-    trigger model response. Server VAD means that the model will detect the start
-    and end of speech based on audio volume and respond at the end of user speech.
+    trigger model response.
+
+    Server VAD means that the model will detect the start and end of speech based on
+    audio volume and respond at the end of user speech.
+
     Semantic VAD is more advanced and uses a turn detection model (in conjunction
     with VAD) to semantically estimate whether the user has finished speaking, then
     dynamically sets a timeout based on this probability. For example, if user audio
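A short sketch of why the `Optional[...]` annotation matters: type checkers now accept an explicit `None` (serialized as `null`) for `turn_detection`, matching the documented way to turn detection off. The VAD dict shape is an assumption, as above:

```python
from openai.types.realtime.realtime_audio_config_input_param import (
    RealtimeAudioConfigInputParam,
)

# Previously rejected by type checkers; now matches the documented behavior of
# sending `null` to disable turn detection entirely.
manual_turns: RealtimeAudioConfigInputParam = {"turn_detection": None}

# Server VAD remains available as before (dict shape assumed for illustration).
with_vad: RealtimeAudioConfigInputParam = {
    "turn_detection": {"type": "server_vad"},
}
```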
