2,194 changes: 1,065 additions & 1,129 deletions .mock/definition/empathic-voice/__package__.yml

Large diffs are not rendered by default.

156 changes: 78 additions & 78 deletions .mock/definition/empathic-voice/chatGroups.yml
@@ -161,6 +161,84 @@ service:
metadata: null
config: null
active: false
get-audio:
path: /v0/evi/chat_groups/{id}/audio
method: GET
docs: >-
Fetches a paginated list of audio for each **Chat** within the specified
**Chat Group**. For more details, see our guide on audio reconstruction
[here](/docs/speech-to-speech-evi/faq#can-i-access-the-audio-of-previous-conversations-with-evi).
source:
openapi: evi-openapi.json
path-parameters:
id:
type: string
docs: Identifier for a Chat Group. Formatted as a UUID.
display-name: Get chat group audio
request:
name: ChatGroupsGetAudioRequest
query-parameters:
page_number:
type: optional<integer>
default: 0
docs: >-
Specifies the page number to retrieve, enabling pagination.


This parameter uses zero-based indexing. For example, setting
`page_number` to 0 retrieves the first page of results (items 0-9
if `page_size` is 10), setting `page_number` to 1 retrieves the
second page (items 10-19), and so on. Defaults to 0, which
retrieves the first page.
page_size:
type: optional<integer>
docs: >-
Specifies the maximum number of results to include per page,
enabling pagination. The value must be between 1 and 100,
inclusive.


For example, if `page_size` is set to 10, each page will include
up to 10 items. Defaults to 10.
ascending_order:
type: optional<boolean>
docs: >-
Specifies the sorting order of the results based on their creation
date. Set to true for ascending order (chronological, with the
oldest records first) and false for descending order
(reverse-chronological, with the newest records first). Defaults
to true.
response:
docs: Success
type: root.ReturnChatGroupPagedAudioReconstructions
status-code: 200
errors:
- root.BadRequestError
examples:
- path-parameters:
id: 369846cf-6ad5-404d-905e-a8acb5cdfc78
query-parameters:
page_number: 0
page_size: 10
ascending_order: true
response:
body:
id: 369846cf-6ad5-404d-905e-a8acb5cdfc78
user_id: e6235940-cfda-3988-9147-ff531627cf42
num_chats: 1
page_number: 0
page_size: 10
total_pages: 1
pagination_direction: ASC
audio_reconstructions_page:
- id: 470a49f6-1dec-4afe-8b61-035d3b2d63b0
user_id: e6235940-cfda-3988-9147-ff531627cf42
status: COMPLETE
filename: >-
e6235940-cfda-3988-9147-ff531627cf42/470a49f6-1dec-4afe-8b61-035d3b2d63b0/reconstructed_audio.mp4
modified_at: 1729875432555
signed_audio_url: https://storage.googleapis.com/...etc.
signed_url_expiration_timestamp_millis: 1730232816964
list-chat-group-events:
path: /v0/evi/chat_groups/{id}/events
method: GET
@@ -541,83 +619,5 @@ service:
0.022247314453125, "Tiredness": 0.0194549560546875,
"Triumph": 0.04107666015625}
metadata: ''
get-audio:
path: /v0/evi/chat_groups/{id}/audio
method: GET
docs: >-
Fetches a paginated list of audio for each **Chat** within the specified
**Chat Group**. For more details, see our guide on audio reconstruction
[here](/docs/speech-to-speech-evi/faq#can-i-access-the-audio-of-previous-conversations-with-evi).
source:
openapi: evi-openapi.json
path-parameters:
id:
type: string
docs: Identifier for a Chat Group. Formatted as a UUID.
display-name: Get chat group audio
request:
name: ChatGroupsGetAudioRequest
query-parameters:
page_number:
type: optional<integer>
default: 0
docs: >-
Specifies the page number to retrieve, enabling pagination.


This parameter uses zero-based indexing. For example, setting
`page_number` to 0 retrieves the first page of results (items 0-9
if `page_size` is 10), setting `page_number` to 1 retrieves the
second page (items 10-19), and so on. Defaults to 0, which
retrieves the first page.
page_size:
type: optional<integer>
docs: >-
Specifies the maximum number of results to include per page,
enabling pagination. The value must be between 1 and 100,
inclusive.


For example, if `page_size` is set to 10, each page will include
up to 10 items. Defaults to 10.
ascending_order:
type: optional<boolean>
docs: >-
Specifies the sorting order of the results based on their creation
date. Set to true for ascending order (chronological, with the
oldest records first) and false for descending order
(reverse-chronological, with the newest records first). Defaults
to true.
response:
docs: Success
type: root.ReturnChatGroupPagedAudioReconstructions
status-code: 200
errors:
- root.BadRequestError
examples:
- path-parameters:
id: 369846cf-6ad5-404d-905e-a8acb5cdfc78
query-parameters:
page_number: 0
page_size: 10
ascending_order: true
response:
body:
id: 369846cf-6ad5-404d-905e-a8acb5cdfc78
user_id: e6235940-cfda-3988-9147-ff531627cf42
num_chats: 1
page_number: 0
page_size: 10
total_pages: 1
pagination_direction: ASC
audio_reconstructions_page:
- id: 470a49f6-1dec-4afe-8b61-035d3b2d63b0
user_id: e6235940-cfda-3988-9147-ff531627cf42
status: COMPLETE
filename: >-
e6235940-cfda-3988-9147-ff531627cf42/470a49f6-1dec-4afe-8b61-035d3b2d63b0/reconstructed_audio.mp4
modified_at: 1729875432555
signed_audio_url: https://storage.googleapis.com/...etc.
signed_url_expiration_timestamp_millis: 1730232816964
source:
openapi: evi-openapi.json
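
For orientation only (this is not part of the diff): the relocated `get-audio` definition describes a paginated GET endpoint, and a raw HTTP call against it might look like the sketch below. The `https://api.hume.ai` base URL and `X-Hume-Api-Key` header are assumptions drawn from Hume's general API conventions; the path, query parameters, and example UUID come from the definition above.

```python
import os

import requests

# Only the path and query parameters below come from the get-audio
# definition in chatGroups.yml; the base URL and X-Hume-Api-Key header
# are assumptions, not something defined in this file.
BASE_URL = "https://api.hume.ai"
API_KEY = os.environ["HUME_API_KEY"]

chat_group_id = "369846cf-6ad5-404d-905e-a8acb5cdfc78"  # UUID from the example block

resp = requests.get(
    f"{BASE_URL}/v0/evi/chat_groups/{chat_group_id}/audio",
    headers={"X-Hume-Api-Key": API_KEY},
    params={
        "page_number": 0,           # zero-based page index (default 0)
        "page_size": 10,            # 1-100 items per page (default 10)
        "ascending_order": "true",  # oldest chats first (default true)
    },
)
resp.raise_for_status()
page = resp.json()  # ReturnChatGroupPagedAudioReconstructions

for reconstruction in page["audio_reconstructions_page"]:
    print(reconstruction["status"], reconstruction.get("signed_audio_url"))
```

The response mirrors the example body above: pagination metadata plus an `audio_reconstructions_page` list whose entries carry a time-limited `signed_audio_url`.
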
100 changes: 50 additions & 50 deletions .mock/definition/empathic-voice/configs.yml
@@ -136,21 +136,25 @@ service:
name: PostedConfig
body:
properties:
builtin_tools:
type: optional<list<optional<root.PostedBuiltinTool>>>
docs: List of built-in tools associated with this Config.
ellm_model:
type: optional<root.PostedEllmModel>
docs: >-
The eLLM setup associated with this Config.


Hume's eLLM (empathic Large Language Model) is a multimodal
language model that takes into account both expression measures
and language. The eLLM generates short, empathic language
responses and guides text-to-speech (TTS) prosody.
event_messages: optional<root.PostedEventMessageSpecs>
evi_version:
type: string
docs: >-
EVI version to use. Only versions `3` and `4-mini` are
supported.
name:
type: string
docs: Name applied to all versions of a particular Config.
version_description:
type: optional<string>
docs: An optional description of the Config version.
prompt: optional<root.PostedConfigPromptSpec>
voice:
type: optional<root.VoiceRef>
docs: A voice specification associated with this Config.
language_model:
type: optional<root.PostedLanguageModel>
docs: >-
@@ -161,31 +165,27 @@ service:
from EVI. Choosing an appropriate supplemental language model
for your use case is crucial for generating fast, high-quality
responses from EVI.
ellm_model:
type: optional<root.PostedEllmModel>
docs: >-
The eLLM setup associated with this Config.


Hume's eLLM (empathic Large Language Model) is a multimodal
language model that takes into account both expression measures
and language. The eLLM generates short, empathic language
responses and guides text-to-speech (TTS) prosody.
tools:
type: optional<list<optional<root.PostedUserDefinedToolSpec>>>
docs: List of user-defined tools associated with this Config.
builtin_tools:
type: optional<list<optional<root.PostedBuiltinTool>>>
docs: List of built-in tools associated with this Config.
event_messages: optional<root.PostedEventMessageSpecs>
name:
type: string
docs: Name applied to all versions of a particular Config.
nudges:
type: optional<root.PostedNudgeSpec>
docs: >-
Configures nudges, brief audio prompts that can guide
conversations when users pause or need encouragement to continue
speaking. Nudges help create more natural, flowing interactions
by providing gentle conversational cues.
prompt: optional<root.PostedConfigPromptSpec>
timeouts: optional<root.PostedTimeoutSpecs>
tools:
type: optional<list<optional<root.PostedUserDefinedToolSpec>>>
docs: List of user-defined tools associated with this Config.
version_description:
type: optional<string>
docs: An optional description of the Config version.
voice:
type: optional<root.VoiceRef>
docs: A voice specification associated with this Config.
webhooks:
type: optional<list<optional<root.PostedWebhookSpec>>>
docs: Webhook config specifications for each subscriber.
@@ -409,16 +409,23 @@ service:
name: PostedConfigVersion
body:
properties:
builtin_tools:
type: optional<list<optional<root.PostedBuiltinTool>>>
docs: List of built-in tools associated with this Config version.
ellm_model:
type: optional<root.PostedEllmModel>
docs: >-
The eLLM setup associated with this Config version.


Hume's eLLM (empathic Large Language Model) is a multimodal
language model that takes into account both expression measures
and language. The eLLM generates short, empathic language
responses and guides text-to-speech (TTS) prosody.
event_messages: optional<root.PostedEventMessageSpecs>
evi_version:
type: string
docs: The version of the EVI used with this config.
version_description:
type: optional<string>
docs: An optional description of the Config version.
prompt: optional<root.PostedConfigPromptSpec>
voice:
type: optional<root.VoiceRef>
docs: A voice specification associated with this Config version.
language_model:
type: optional<root.PostedLanguageModel>
docs: >-
@@ -430,25 +437,18 @@ service:
from EVI. Choosing an appropriate supplemental language model
for your use case is crucial for generating fast, high-quality
responses from EVI.
ellm_model:
type: optional<root.PostedEllmModel>
docs: >-
The eLLM setup associated with this Config version.


Hume's eLLM (empathic Large Language Model) is a multimodal
language model that takes into account both expression measures
and language. The eLLM generates short, empathic language
responses and guides text-to-speech (TTS) prosody.
nudges: optional<root.PostedNudgeSpec>
prompt: optional<root.PostedConfigPromptSpec>
timeouts: optional<root.PostedTimeoutSpecs>
tools:
type: optional<list<optional<root.PostedUserDefinedToolSpec>>>
docs: List of user-defined tools associated with this Config version.
builtin_tools:
type: optional<list<optional<root.PostedBuiltinTool>>>
docs: List of built-in tools associated with this Config version.
event_messages: optional<root.PostedEventMessageSpecs>
timeouts: optional<root.PostedTimeoutSpecs>
nudges: optional<root.PostedNudgeSpec>
version_description:
type: optional<string>
docs: An optional description of the Config version.
voice:
type: optional<root.VoiceRef>
docs: A voice specification associated with this Config version.
webhooks:
type: optional<list<optional<root.PostedWebhookSpec>>>
docs: Webhook config specifications for each subscriber.
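
The configs.yml hunks above mostly alphabetize the `PostedConfig` and `PostedConfigVersion` properties and tighten the `evi_version` docs. As a rough sketch (not part of this diff), a minimal create-config request using those documented fields might look like this, assuming the create endpoint is `POST /v0/evi/configs` and the API-key header used earlier:

```python
import os

import requests

API_KEY = os.environ["HUME_API_KEY"]

# Field names mirror the PostedConfig properties shown above; the
# POST /v0/evi/configs path and the X-Hume-Api-Key header are assumptions,
# since the rendered hunks only cover the request body schema.
config_body = {
    "evi_version": "3",  # per the updated docs, only `3` and `4-mini` are supported
    "name": "customer-support-config",         # required; applies to all Config versions
    "version_description": "Initial version",  # optional, illustrative value
    # Optional fields (prompt, voice, language_model, ellm_model, tools,
    # builtin_tools, event_messages, timeouts, nudges, webhooks) are omitted
    # here and left to their defaults.
}

resp = requests.post(
    "https://api.hume.ai/v0/evi/configs",
    headers={"X-Hume-Api-Key": API_KEY, "Content-Type": "application/json"},
    json=config_body,
)
resp.raise_for_status()
print(resp.json())
```
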
12 changes: 6 additions & 6 deletions .mock/definition/empathic-voice/prompts.yml
@@ -118,9 +118,6 @@ service:
name:
type: string
docs: Name applied to all versions of a particular Prompt.
version_description:
type: optional<string>
docs: An optional description of the Prompt version.
text:
type: string
docs: >-
@@ -137,6 +134,9 @@ service:

For help writing a system prompt, see our [Prompting
Guide](/docs/speech-to-speech-evi/guides/prompting).
version_description:
type: optional<string>
docs: An optional description of the Prompt version.
content-type: application/json
response:
docs: Created
@@ -268,9 +268,6 @@ service:
name: PostedPromptVersion
body:
properties:
version_description:
type: optional<string>
docs: An optional description of the Prompt version.
text:
type: string
docs: >-
@@ -288,6 +285,9 @@ service:

For help writing a system prompt, see our [Prompting
Guide](/docs/speech-to-speech-evi/guides/prompting).
version_description:
type: optional<string>
docs: An optional description of the Prompt version.
content-type: application/json
response:
docs: Created
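
The prompts.yml hunks only move `version_description` after `text` in the `PostedPrompt` and `PostedPromptVersion` bodies. A minimal create-prompt request built from those documented fields might look like the following sketch; the `POST /v0/evi/prompts` path and API-key header are assumptions, and the prompt text is a placeholder, since the rendered hunks show only the body schema.

```python
import os

import requests

API_KEY = os.environ["HUME_API_KEY"]

# `name`, `text`, and `version_description` come from the PostedPrompt body
# in this diff; the endpoint path and auth header are assumed.
prompt_body = {
    "name": "weather-assistant-prompt",  # applies to all versions of this Prompt
    "text": "<role>You are an AI weather assistant providing accurate forecasts.</role>",
    "version_description": "First draft",  # optional description of this version
}

resp = requests.post(
    "https://api.hume.ai/v0/evi/prompts",
    headers={"X-Hume-Api-Key": API_KEY, "Content-Type": "application/json"},
    json=prompt_body,
)
resp.raise_for_status()
print(resp.json())
```
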