28 changes: 14 additions & 14 deletions .mock/definition/empathic-voice/__package__.yml
@@ -952,20 +952,20 @@ types:
interim:
type: boolean
docs: >-
Indicates if this message contains an immediate and unfinalized
transcript of the user’s audio input. If it does, words may be
repeated across successive `UserMessage` messages as our transcription
model becomes more confident about what was said with additional
context. Interim messages are useful to detect if the user is
interrupting during audio playback on the client. Even without a
finalized transcription, along with
[UserInterrupt](/reference/empathic-voice-interface-evi/chat/chat#receive.UserInterruption.type)
messages, interim `UserMessages` are useful for detecting if the user
is interrupting during audio playback on the client, signaling to stop
playback in your application. Interim `UserMessages` will only be
received if the
[verbose_transcription](/reference/empathic-voice-interface-evi/chat/chat#request.query.verbose_transcription)
query parameter is set to `true` in the handshake request.
Indicates whether this `UserMessage` contains an interim (unfinalized)
transcript.


- `true`: the transcript is provisional; words may be repeated or
refined in subsequent `UserMessage` responses as additional audio is
processed.

- `false`: the transcript is final and complete.


Interim transcripts are only sent when the
[`verbose_transcription`](/reference/empathic-voice-interface-evi/chat/chat#request.query.verbose_transcription)
query parameter is set to `true` in the initial handshake.
message:
type: ChatMessage
docs: Transcript of the message.
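Per the new docstring above, interim `UserMessage`s are only delivered when `verbose_transcription` is enabled in the handshake. A minimal sketch of building such a handshake URL (the endpoint path is illustrative; only the query parameter name comes from the docs):

```python
from urllib.parse import urlencode


def build_chat_url(
    base: str = "wss://api.hume.ai/v0/evi/chat",
    *,
    verbose_transcription: bool = False,
) -> str:
    # The flag must be set in the initial handshake request; interim
    # `UserMessage`s are only sent when it is "true".
    params = {"verbose_transcription": str(verbose_transcription).lower()}
    return f"{base}?{urlencode(params)}"


print(build_chat_url(verbose_transcription=True))
# → wss://api.hume.ai/v0/evi/chat?verbose_transcription=true
```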
53 changes: 44 additions & 9 deletions poetry.lock
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -3,7 +3,7 @@ name = "hume"

[tool.poetry]
name = "hume"
version = "0.10.1"
version = "0.10.2"
description = "A Python SDK for Hume AI"
readme = "README.md"
authors = []
7 changes: 6 additions & 1 deletion src/hume/empathic_voice/types/user_message.py
@@ -26,7 +26,12 @@ class UserMessage(UniversalBaseModel):

interim: bool = pydantic.Field()
"""
Indicates if this message contains an immediate and unfinalized transcript of the user’s audio input. If it does, words may be repeated across successive `UserMessage` messages as our transcription model becomes more confident about what was said with additional context. Interim messages are useful to detect if the user is interrupting during audio playback on the client. Even without a finalized transcription, along with [UserInterrupt](/reference/empathic-voice-interface-evi/chat/chat#receive.UserInterruption.type) messages, interim `UserMessages` are useful for detecting if the user is interrupting during audio playback on the client, signaling to stop playback in your application. Interim `UserMessages` will only be received if the [verbose_transcription](/reference/empathic-voice-interface-evi/chat/chat#request.query.verbose_transcription) query parameter is set to `true` in the handshake request.
Indicates whether this `UserMessage` contains an interim (unfinalized) transcript.

- `true`: the transcript is provisional; words may be repeated or refined in subsequent `UserMessage` responses as additional audio is processed.
- `false`: the transcript is final and complete.

Interim transcripts are only sent when the [`verbose_transcription`](/reference/empathic-voice-interface-evi/chat/chat#request.query.verbose_transcription) query parameter is set to `true` in the initial handshake.
"""

message: ChatMessage = pydantic.Field()
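One way client code might act on the `interim` flag described in the docstring (a sketch; the payload shape mirrors the documented field, but the action names and dispatch logic are illustrative, not part of the SDK):

```python
def handle_user_message(msg: dict, playback_active: bool) -> str:
    """Decide what to do with a decoded user-message payload.

    ``msg["interim"]`` mirrors the `interim` field documented above.
    """
    if msg.get("interim"):
        # Provisional transcript: words may be repeated or refined in
        # later `UserMessage`s, so don't record it yet. If assistant
        # audio is playing, treat user speech as an interruption and
        # stop playback.
        return "stop_playback" if playback_active else "ignore"
    # Final transcript: safe to append to the conversation log.
    return "append_transcript"
```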