Commit 9474719

Integrated latest changes at 10-26-2025 7:30:14 PM
1 parent f45baec commit 9474719

File tree

8 files changed: +442 −1 lines changed


ej2-react-toc.html

Lines changed: 1 addition & 0 deletions
@@ -618,6 +618,7 @@
 <li><a href="/ej2-react/chat-ui/header">Header</a></li>
 <li><a href="/ej2-react/chat-ui/footer">Footer</a></li>
 <li><a href="/ej2-react/chat-ui/templates">Templates</a></li>
+<li><a href="/ej2-react/chat-ui/speech-to-text">Speech to Text</a></li>
 <li><a href="/ej2-react/chat-ui/appearance">Appearance</a></li>
 <li><a href="/ej2-react/chat-ui/globalization">Globalization</a></li>
 <li><a href="/ej2-react/chat-ui/accessibility">Accessibility</a></li>

ej2-react/ai-assistview/speech/speech-to-text.md

Lines changed: 20 additions & 1 deletion
@@ -24,7 +24,18 @@ Before integrating `Speech-to-Text`, ensure the following:
 
 ## Configure Speech-to-Text
 
-To enable Speech-to-Text functionality, modify the `src/App.jsx` or `src/App.tsx` file to incorporate the Web Speech API. The [SpeechToText](https://ej2.syncfusion.com/react/documentation/speech-to-text/getting-started) component listens for microphone input, transcribes spoken words, and updates the AI AssistView's editable footer with the transcribed text. The transcribed text is then sent as a prompt to the Azure OpenAI service via the AI AssistView component.
+To enable Speech-to-Text functionality in the React AI AssistView component, update the `src/App.jsx` or `src/App.tsx` file to incorporate the Web Speech API.
+
+The [SpeechToText](https://ej2.syncfusion.com/react/documentation/speech-to-text/getting-started) component listens to audio input from the device’s microphone, transcribes spoken words into text, and updates the AI AssistView’s editable footer using the [footerTemplate](https://ej2.syncfusion.com/react/documentation/api/ai-assistview/#footertemplate) property to display the transcribed text. The transcribed text is then sent as a prompt to the Azure OpenAI service via the AI AssistView component.
+
+### Configuration Options
+
+* **[`lang`](https://ej2.syncfusion.com/react/documentation/api/speech-to-text/#lang)**: Specifies the language for speech recognition. For example:
+
+  * `en-US` for American English
+  * `fr-FR` for French
+
+* **[`allowInterimResults`](https://ej2.syncfusion.com/react/documentation/api/speech-to-text/#allowInterimResults)**: Set to `true` to receive real-time (interim) recognition results, or `false` to receive only final results.
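When `allowInterimResults` is enabled, provisional (interim) transcript chunks arrive repeatedly and are eventually superseded by a final result. A minimal sketch of that flow (the helper names are hypothetical, not part of the Syncfusion API):

```javascript
// Sketch of interim-vs-final transcript handling; applyTranscript and
// footerText are illustrative helpers, not part of the SpeechToText API.
function applyTranscript(state, chunk) {
  if (chunk.isInterim) {
    // Interim chunks are provisional: each one replaces the pending text.
    return { committed: state.committed, pending: chunk.text };
  }
  // Final chunks are committed and the pending interim text is discarded.
  return { committed: (state.committed + ' ' + chunk.text).trim(), pending: '' };
}

// Text shown in the editable footer: committed text plus any pending interim text.
function footerText(state) {
  return (state.committed + ' ' + state.pending).trim();
}

let state = { committed: '', pending: '' };
state = applyTranscript(state, { isInterim: true, text: 'hello wor' });
state = applyTranscript(state, { isInterim: false, text: 'hello world' });
console.log(footerText(state)); // → hello world
```

With `allowInterimResults` set to `false`, only the final chunk would ever be delivered, so the footer would update once rather than continuously.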
 
 {% tabs %}
 {% highlight js tabtitle="app.jsx" %}
@@ -37,6 +48,14 @@ To enable Speech-to-Text functionality, modify the `src/App.jsx` or `src/App.tsx
 
 {% previewsample "page.domainurl/code-snippet/ai-assistview/speech/stt" %}
 
+## Error Handling
+
+The `SpeechToText` component provides events to handle errors that may occur during speech recognition. For more information, refer to the [Error Handling](https://ej2.syncfusion.com/react/documentation/speech-to-text/speech-recognition#error-handling) section in the documentation.
+
+## Browser Compatibility
+
+The `SpeechToText` component relies on the [Speech Recognition API](https://developer.mozilla.org/en-US/docs/Web/API/SpeechRecognition), which has limited browser support. Refer to the [Browser Compatibility](https://ej2.syncfusion.com/react/documentation/speech-to-text/speech-recognition#browser-support) section for detailed information.
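As a practical illustration of that limitation, an application can feature-detect the API before enabling the microphone button; Chromium-based browsers expose it under the prefixed `webkitSpeechRecognition` name. A sketch (a `window`-like object is passed in so the logic is testable outside a browser):

```javascript
// Feature-detection sketch for the Speech Recognition API. Chromium-based
// browsers expose webkitSpeechRecognition; the unprefixed name is rarer.
function getSpeechRecognition(win) {
  return win.SpeechRecognition || win.webkitSpeechRecognition || null;
}

function isSpeechRecognitionSupported(win) {
  return getSpeechRecognition(win) !== null;
}

// In a browser: isSpeechRecognitionSupported(window)
```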
+
 ## See Also
 
 * [Text-to-Speech](./text-to-speech.md)
Lines changed: 51 additions & 0 deletions
@@ -0,0 +1,51 @@
---
layout: post
title: Speech-to-Text With React Chat UI component | Syncfusion
description: Checkout and learn about configuration of Speech-to-Text With React Chat UI component of Syncfusion Essential JS 2 and more details.
platform: ej2-react
control: Chat UI
documentation: ug
domainurl: ##DomainURL##
---

# Speech-to-Text in React Chat UI

The Syncfusion React Chat UI component integrates `Speech-to-Text` functionality through the browser's [Web Speech API](https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_API). This enables the conversion of spoken words into text using the device's microphone, allowing users to interact with the Chat UI through voice input.

## Configure Speech-to-Text

To enable Speech-to-Text functionality in the React Chat UI component, update the `src/App.jsx` or `src/App.tsx` file to incorporate the Web Speech API.

The [SpeechToText](https://ej2.syncfusion.com/react/documentation/speech-to-text/getting-started) component listens to audio input from the device’s microphone, transcribes spoken words into text, and updates the Chat UI’s editable footer using the [footerTemplate](https://ej2.syncfusion.com/react/documentation/api/chat-ui/#footertemplate) property to display the transcribed text. Once the transcription appears in the footer, users can send it as a message to others.

### Configuration Options

* **[`lang`](https://ej2.syncfusion.com/react/documentation/api/speech-to-text/#lang)**: Specifies the language for speech recognition. For example:

  * `en-US` for American English
  * `fr-FR` for French

* **[`allowInterimResults`](https://ej2.syncfusion.com/react/documentation/api/speech-to-text/#allowInterimResults)**: Set to `true` to receive real-time (interim) recognition results, or `false` to receive only final results.
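For intuition, these options correspond to fields on the underlying Web Speech API `SpeechRecognition` object (`lang` and `interimResults`). The mapping below is an illustrative sketch with a hypothetical helper and an assumed default, not the component's actual internals:

```javascript
// Illustrative mapping from the component options to plausible underlying
// SpeechRecognition settings; toRecognitionSettings is a hypothetical helper.
function toRecognitionSettings(options) {
  return {
    lang: options.lang || 'en-US',                        // BCP 47 tag; 'en-US' default is assumed here
    interimResults: options.allowInterimResults === true  // final-only unless explicitly opted in
  };
}

console.log(toRecognitionSettings({ lang: 'fr-FR', allowInterimResults: true }));
// → { lang: 'fr-FR', interimResults: true }
```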
29+
30+
{% tabs %}
31+
{% highlight js tabtitle="app.jsx" %}
32+
{% include code-snippet/chat-ui/stt/app/index.jsx %}
33+
{% endhighlight %}
34+
{% highlight ts tabtitle="app.tsx" %}
35+
{% include code-snippet/chat-ui/stt/app/index.tsx %}
36+
{% endhighlight %}
37+
{% endtabs %}
38+
39+
{% previewsample "page.domainurl/code-snippet/chat-ui/stt" %}
40+
41+
## Error Handling
42+
43+
The `SpeechToText` component provides events to handle errors that may occur during speech recognition. For more information, refer to the [Error Handling](https://ej2.syncfusion.com/react/documentation/speech-to-text/speech-recognition#error-handling) section in the documentation.
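The Web Speech API reports failures through error codes such as `no-speech`, `not-allowed`, and `network`. One way an error handler might translate those codes into user-facing messages (a sketch; the helper name and message strings are illustrative, not from the Syncfusion library):

```javascript
// Maps standard SpeechRecognition error codes to user-facing messages.
// The message strings are illustrative, not part of the Syncfusion API.
const SPEECH_ERROR_MESSAGES = {
  'no-speech': 'No speech was detected. Please try again.',
  'not-allowed': 'Microphone access was denied. Check browser permissions.',
  'network': 'A network error interrupted speech recognition.'
};

function describeSpeechError(code) {
  // Unknown codes fall back to a generic message that still surfaces the code.
  return SPEECH_ERROR_MESSAGES[code] || 'Speech recognition failed (' + code + ').';
}

console.log(describeSpeechError('not-allowed'));
// → Microphone access was denied. Check browser permissions.
```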
## Browser Compatibility

The `SpeechToText` component relies on the [Speech Recognition API](https://developer.mozilla.org/en-US/docs/Web/API/SpeechRecognition), which has limited browser support. Refer to the [Browser Compatibility](https://ej2.syncfusion.com/react/documentation/speech-to-text/speech-recognition#browser-support) section for detailed information.

## See Also

* [Messages](./messages)
Lines changed: 112 additions & 0 deletions
@@ -0,0 +1,112 @@
import * as React from 'react';
import * as ReactDOM from 'react-dom';
import { ChatUIComponent, MessagesDirective, MessageDirective } from '@syncfusion/ej2-react-interactive-chat';
import { ButtonComponent } from '@syncfusion/ej2-react-buttons';
import { SpeechToTextComponent } from '@syncfusion/ej2-react-inputs';

function App() {
  const chatInstance = React.useRef(null);
  const speechToTextObj = React.useRef(null);
  const chatuiFooter = React.useRef(null);
  const chatuiSendButton = React.useRef(null);

  const currentUserModel = {
    id: 'user1',
    user: 'Albert',
  };

  const michaleUserModel = {
    id: 'user2',
    user: 'Michale Suyama',
  };

  // Renders the footer template: editable input, speech-to-text button, and send button
  const footerTemplate = () => {
    return (
      <div className="e-footer-wrapper">
        <div id="chatui-footer" ref={chatuiFooter} className="content-editor" contentEditable="true" placeholder="Click to speak or start typing..." onInput={toggleButtons} onKeyDown={handleKeyDown}></div>
        <div className="option-container">
          <SpeechToTextComponent id="speechToText" ref={speechToTextObj} cssClass="e-flat" transcriptChanged={onTranscriptChange} onStop={onListeningStop} created={onCreated} />
          <ButtonComponent id="chatui-sendButton" ref={chatuiSendButton} className="e-assist-send e-icons" onClick={sendIconClicked} />
        </div>
      </div>
    );
  };

  // Sends the current footer text as a message and clears the input
  const sendIconClicked = () => {
    const editor = chatuiFooter.current;
    const messageContent = editor ? editor.innerText : '';
    if (editor && messageContent.trim()) {
      chatInstance.current?.addMessage({
        author: currentUserModel,
        text: messageContent,
      });
      editor.innerText = '';
      toggleButtons(); // Update button visibility
    }
  };

  // Updates the footer input with the latest speech transcript
  const onTranscriptChange = (args) => {
    if (chatuiFooter.current) {
      chatuiFooter.current.innerText = args.transcript;
    }
  };

  // Toggles button visibility when speech-to-text listening stops
  const onListeningStop = () => {
    toggleButtons();
  };

  // Initializes button visibility when the speech-to-text component is created
  const onCreated = () => {
    toggleButtons();
  };

  // Toggles visibility of the send and speech buttons based on whether the input has text
  const toggleButtons = () => {
    const chatuiFooterEle = chatuiFooter.current;
    const sendButtonEle = chatuiSendButton.current?.element;
    const speechButtonEle = speechToTextObj.current?.element;
    if (!chatuiFooterEle || !sendButtonEle || !speechButtonEle) {
      return;
    }
    const hasText = chatuiFooterEle.innerText.trim() !== '';
    sendButtonEle.classList.toggle('visible', hasText);
    speechButtonEle.classList.toggle('visible', !hasText);
    // Normalize an "empty" contentEditable element, which browsers may leave as '<br>'
    if (!hasText && (chatuiFooterEle.innerHTML.trim() === '' || chatuiFooterEle.innerHTML === '<br>')) {
      chatuiFooterEle.innerHTML = '';
    }
  };

  // Sends the message on Enter; Shift+Enter inserts a line break
  const handleKeyDown = (event) => {
    if (event.key === 'Enter' && !event.shiftKey) {
      sendIconClicked();
      event.preventDefault();
    }
  };

  React.useEffect(() => {
    // Defer toggleButtons until after mount to ensure refs are ready
    toggleButtons();
  }, []);

  return (
    <div className="integration-speechtotext">
      <ChatUIComponent id="chatui" ref={chatInstance} user={currentUserModel} footerTemplate={footerTemplate}>
        <MessagesDirective>
          <MessageDirective text="Hi Michale, are we on track for the deadline?" author={currentUserModel} />
          <MessageDirective text="Yes, the design phase is complete." author={michaleUserModel} />
          <MessageDirective text="I’ll review it and send feedback by today." author={currentUserModel} />
        </MessagesDirective>
      </ChatUIComponent>
    </div>
  );
}

ReactDOM.render(<App />, document.getElementById('container'));
Lines changed: 112 additions & 0 deletions
@@ -0,0 +1,112 @@
import * as React from 'react';
import * as ReactDOM from 'react-dom';
import { ChatUIComponent, MessagesDirective, MessageDirective, UserModel } from '@syncfusion/ej2-react-interactive-chat';
import { ButtonComponent } from '@syncfusion/ej2-react-buttons';
import { SpeechToTextComponent } from '@syncfusion/ej2-react-inputs';

function App() {
  const chatInstance = React.useRef<ChatUIComponent>(null);
  const speechToTextObj = React.useRef<SpeechToTextComponent>(null);
  const chatuiFooter = React.useRef<HTMLDivElement>(null);
  const chatuiSendButton = React.useRef<ButtonComponent>(null);

  const currentUserModel: UserModel = {
    id: 'user1',
    user: 'Albert',
  };

  const michaleUserModel: UserModel = {
    id: 'user2',
    user: 'Michale Suyama',
  };

  // Renders the footer template: editable input, speech-to-text button, and send button
  const footerTemplate = () => {
    return (
      <div className="e-footer-wrapper">
        <div id="chatui-footer" ref={chatuiFooter} className="content-editor" contentEditable="true" placeholder="Click to speak or start typing..." onInput={toggleButtons} onKeyDown={handleKeyDown}></div>
        <div className="option-container">
          <SpeechToTextComponent id="speechToText" ref={speechToTextObj} cssClass="e-flat" transcriptChanged={onTranscriptChange} onStop={onListeningStop} created={onCreated} />
          <ButtonComponent id="chatui-sendButton" ref={chatuiSendButton} className="e-assist-send e-icons" onClick={sendIconClicked} />
        </div>
      </div>
    );
  };

  // Sends the current footer text as a message and clears the input
  const sendIconClicked = () => {
    const editor = chatuiFooter.current;
    const messageContent = editor ? editor.innerText : '';
    if (editor && messageContent.trim()) {
      chatInstance.current?.addMessage({
        author: currentUserModel,
        text: messageContent,
      });
      editor.innerText = '';
      toggleButtons(); // Update button visibility
    }
  };

  // Updates the footer input with the latest speech transcript
  const onTranscriptChange = (args: { transcript: string }) => {
    if (chatuiFooter.current) {
      chatuiFooter.current.innerText = args.transcript;
    }
  };

  // Toggles button visibility when speech-to-text listening stops
  const onListeningStop = () => {
    toggleButtons();
  };

  // Initializes button visibility when the speech-to-text component is created
  const onCreated = () => {
    toggleButtons();
  };

  // Toggles visibility of the send and speech buttons based on whether the input has text
  const toggleButtons = () => {
    const chatuiFooterEle = chatuiFooter.current;
    const sendButtonEle = chatuiSendButton.current?.element;
    const speechButtonEle = speechToTextObj.current?.element;
    if (!chatuiFooterEle || !sendButtonEle || !speechButtonEle) {
      return;
    }
    const hasText = chatuiFooterEle.innerText.trim() !== '';
    sendButtonEle.classList.toggle('visible', hasText);
    speechButtonEle.classList.toggle('visible', !hasText);
    // Normalize an "empty" contentEditable element, which browsers may leave as '<br>'
    if (!hasText && (chatuiFooterEle.innerHTML.trim() === '' || chatuiFooterEle.innerHTML === '<br>')) {
      chatuiFooterEle.innerHTML = '';
    }
  };

  // Sends the message on Enter; Shift+Enter inserts a line break
  const handleKeyDown = (event: React.KeyboardEvent<HTMLDivElement>) => {
    if (event.key === 'Enter' && !event.shiftKey) {
      sendIconClicked();
      event.preventDefault();
    }
  };

  React.useEffect(() => {
    // Defer toggleButtons until after mount to ensure refs are ready
    toggleButtons();
  }, []);

  return (
    <div className="integration-speechtotext">
      <ChatUIComponent id="chatui" ref={chatInstance} user={currentUserModel} footerTemplate={footerTemplate}>
        <MessagesDirective>
          <MessageDirective text="Hi Michale, are we on track for the deadline?" author={currentUserModel} />
          <MessageDirective text="Yes, the design phase is complete." author={michaleUserModel} />
          <MessageDirective text="I’ll review it and send feedback by today." author={currentUserModel} />
        </MessagesDirective>
      </ChatUIComponent>
    </div>
  );
}

ReactDOM.render(<App />, document.getElementById('container'));
Lines changed: 9 additions & 0 deletions
@@ -0,0 +1,9 @@
/* Represents the styles for loader */
#loader {
  color: #008cff;
  height: 40px;
  left: 45%;
  position: absolute;
  top: 45%;
  width: 30%;
}
