According to the Web Speech API Specification

5.2.3 SpeechSynthesisUtterance Attributes: text attribute
This attribute specifies the text to be synthesized and spoken for this utterance. This may be either plain text or a complete, well-formed SSML document. [SSML] For speech synthesis engines that do not support SSML, or only support certain tags, the user agent or speech engine must strip away the tags they do not support and speak the text. There may be a maximum length of the text; it may be limited to 32,767 characters.
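As a minimal illustration (my own sketch, not taken from the specification), both of the following assignments are valid per the quoted text; whether the second is interpreted as SSML or simply spoken with its tags stripped depends on the engine:

// Plain text and a complete SSML document assigned as utterance text.
// How the SSML case is rendered is engine-specific.
const plain = new SpeechSynthesisUtterance("Hello world");
const ssml = new SpeechSynthesisUtterance(
  `<?xml version="1.0"?>
   <speak version="1.1" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
     Hello <emphasis level="strong">world</emphasis>
   </speak>`);
speechSynthesis.speak(plain);
speechSynthesis.speak(ssml);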
For example, an SSML document could contain one or more <voice> elements where the expected result is rendering of different audio output for each <voice> element; the single SSML document could also contain one or more <audio> elements.
It is not clear how a complete SSML document should be parsed and processed when set as the value of the .text property of a single SpeechSynthesisUtterance() instance; in particular, how the requirements of the process steps quoted below would be met, or whether a single SpeechSynthesisUtterance.text property is simply not capable of processing a complete SSML document.

1.2 Speech Synthesis Process Steps
XML parse: An XML parser is used to extract the document tree and content from the incoming text document. The structure, tags and attributes obtained in this step influence each of the following steps.
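For illustration (my own sketch, not part of the quoted specification text), this step corresponds to what DOMParser already does in the browser; the parsererror check is a common pattern rather than anything the Web Speech API defines:

// Parse an SSML string into a document tree; DOMParser signals
// well-formedness errors via a <parsererror> element instead of throwing.
const ssmlSource = `<?xml version="1.0"?>
  <speak version="1.1" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
    hello world
  </speak>`;
const ssmlDocument = new DOMParser().parseFromString(ssmlSource, "application/xml");
if (ssmlDocument.getElementsByTagName("parsererror").length) {
  throw new Error("SSML input is not well-formed XML");
}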
Structure analysis: The structure of a document influences the way in which a document should be read. For example, there are common speaking patterns associated with paragraphs and sentences.
Markup support: The p and s elements defined in SSML explicitly indicate document structures that affect the speech output.
Non-markup behavior: In documents and parts of documents where these elements are not used, the synthesis processor is responsible for inferring the structure by automated analysis of the text, often using punctuation and other language-specific data.
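For illustration (my own fragment, not from the quoted text), explicit structure markup might look like this inside a <speak> document such as the one parsed above:

// p and s mark paragraph and sentence boundaries explicitly, so the
// processor does not have to infer them from punctuation.
const structureFragment = `
  <p>
    <s>The Web Speech API exposes speech synthesis to script.</s>
    <s>SSML describes how that speech should be structured and rendered.</s>
  </p>`;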
Text normalization: All written languages have special constructs that require a conversion of the written form (orthographic form) into the spoken form. Text normalization is an automated process of the synthesis processor that performs this conversion. For example, for English, when "$200" appears in a document it may be spoken as "two hundred dollars". Similarly, "1/2" may be spoken as "half", "January second", "February first", "one of two" and so on. By the end of this step the text to be spoken has been converted completely into tokens. The exact details of what constitutes a token are language-specific. In English, tokens are usually separated by white space and are typically words. For languages with different tokenization behavior, the term "word" in this specification is intended to mean an appropriately comparable unit. Tokens in SSML cannot span markup tags except within the token and w elements. A simple English example is "cup<break/>board"; outside the token and w elements, the synthesis processor will treat this as the two tokens "cup" and "board" rather than as one token (word) with a pause in the middle. Breaking one token into multiple tokens this way will likely affect how the processor treats it.
Markup support: The say-as element can be used in the input document to explicitly indicate the presence and type of these constructs and to resolve ambiguities. The set of constructs that can be marked has not yet been defined but might include dates, times, numbers, acronyms, currency amounts and more. Note that many acronyms and abbreviations can be handled by the author via direct text replacement or by use of the sub element, e.g. "BBC" can be written as "B B C" and "AAA" can be written as "triple A". These replacement written forms will likely be pronounced as one would want the original acronyms to be pronounced. In the case of Japanese text, if you have a synthesis processor that supports both Kanji and kana, you may be able to use the sub element to identify whether 今日は should be spoken as きょうは ("kyou wa" = "today") or こんにちは ("konnichiwa" = "hello").
Non-markup behavior: For text content that is not marked with the say-as element the synthesis processor is expected to make a reasonable effort to automatically locate and convert these constructs to a speakable form. Because of inherent ambiguities (such as the "1/2" example above) and because of the wide range of possible constructs in any language, this process may introduce errors in the speech output and may cause different processors to render the same document differently.
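For illustration (my own fragment; the interpret-as value "date" is an engine-dependent assumption, since SSML itself does not define the set of values):

// say-as disambiguates the written form "1/2"; sub substitutes a speakable
// alias for an abbreviation the engine might otherwise mispronounce.
const normalizationFragment = `
  <say-as interpret-as="date" format="mdy">1/2</say-as>
  <sub alias="World Wide Web Consortium">W3C</sub>`;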
Text-to-phoneme conversion: Once the synthesis processor has determined the set of tokens to be spoken, it must derive pronunciations for each token. Pronunciations may be conveniently described as sequences of phonemes, which are units of sound in a language that serve to distinguish one word from another. Each language (and sometimes each national or dialect variant of a language) has a specific phoneme set: e.g., most US English dialects have around 45 phonemes, Hawai'ian has between 12 and 18 (depending on who you ask), and some languages have more than 100! This conversion is made complex by a number of issues. One issue is that there are differences between written and spoken forms of a language, and these differences can lead to indeterminacy or ambiguity in the pronunciation of written words. For example, compared with their spoken form, words in Hebrew and Arabic are usually written with no vowels, or only a few vowels specified. In many languages the same written word may have many spoken forms. For example, in English, "read" may be spoken as "reed" (I will read the book) or "red" (I have read the book). Both human speakers and synthesis processors can pronounce these words correctly in context but may have difficulty without context (see "Non-markup behavior" below). Another issue is the handling of words with non-standard spellings or pronunciations. For example, an English synthesis processor will often have trouble determining how to speak some non-English-origin names, e.g. "Caius College" (pronounced "keys college") and President Tito (pronounced "sutto"), the president of the Republic of Kiribati (pronounced "kiribass").
Markup support: The phoneme element allows a phonemic sequence to be provided for any token or token sequence. This provides the content creator with explicit control over pronunciations. The say-as element might also be used to indicate that text is a proper name that may allow a synthesis processor to apply special rules to determine a pronunciation. The lexicon and lookup elements can be used to reference external definitions of pronunciations. These elements can be particularly useful for acronyms and abbreviations that the processor is unable to resolve via its own text normalization and that are not addressable via direct text substitution or the sub element (see paragraph 3, above).
Non-markup behavior: In the absence of a phoneme element the synthesis processor must apply automated capabilities to determine pronunciations. This is typically achieved by looking up tokens in a pronunciation dictionary (which may be language-dependent) and applying rules to determine other pronunciations. Synthesis processors are designed to perform text-to-phoneme conversions so most words of most documents can be handled automatically. As an alternative to relying upon the processor, authors may choose to perform some conversions themselves prior to encoding in SSML. Written words with indeterminate or ambiguous pronunciations could be replaced by words with an unambiguous pronunciation; for example, in the case of "read", "I will reed the book". Authors should be aware, however, that the resulting SSML document may not be optimal for visual display.
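For illustration (my own fragment, reusing the "Caius" example from the quoted text):

// phoneme supplies an explicit IPA pronunciation for a token the engine is
// likely to get wrong; lexicon/lookup (not shown) would instead reference an
// external PLS pronunciation lexicon.
const pronunciationFragment = `
  <phoneme alphabet="ipa" ph="kiːz">Caius</phoneme> College`;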
Prosody analysis: Prosody is the set of features of speech output that includes the pitch (also called intonation or melody), the timing (or rhythm), the pausing, the speaking rate, the emphasis on words and many other features. Producing human-like prosody is important for making speech sound natural and for correctly conveying the meaning of spoken language.
Markup support: The emphasis element, break element and prosody element may all be used by document creators to guide the synthesis processor in generating appropriate prosodic features in the speech output.
Non-markup behavior: In the absence of these elements, synthesis processors are expert (but not perfect) in automatically generating suitable prosody. This is achieved through analysis of the document structure, sentence syntax, and other information that can be inferred from the text input.
While most of the elements of SSML can be considered high-level in that they provide either content to be spoken or logical descriptions of style, the break and prosody elements mentioned above operate at a later point in the process and thus must coexist both with uses of the emphasis element and with the processor's own determinations of prosodic behavior. Unless specified in the appropriate sections, details of the interactions between the processor's own determinations and those provided by the author at this level are processor-specific. Authors are encouraged not to casually or arbitrarily mix these two levels of control.
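For illustration (my own fragment; the attribute values are examples only, not recommendations):

// emphasis, break and prosody are the author-level prosody controls named above.
const prosodyFragment = `
  Please <emphasis level="strong">do not</emphasis> hang up.
  <break time="500ms"/>
  <prosody rate="slow" pitch="-10%">Your call is important to us.</prosody>`;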
Waveform production: The phonemes and prosodic information are used by the synthesis processor in the production of the audio waveform. There are many approaches to this processing step so there may be considerable processor-specific variation.
Markup support: The voice element allows the document creator to request a particular voice or specific voice qualities (e.g. a young male voice). The audio element allows for insertion of recorded audio data into the output stream, with optional control over the duration, sound level and playback speed of the recording. Rendering can be restricted to a subset of the document by using the trimming attributes on the speak element.
Non-markup behavior: The default volume/sound level, speed, and pitch/frequency of both voices and recorded audio in the document are that of the unmodified waveforms, whether they be voices or recordings.
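For illustration (my own fragment; attribute names follow SSML 1.1 and the audio URL is a placeholder):

// voice requests particular voice characteristics; audio inserts recorded
// audio with fallback text spoken if the recording cannot be played.
const waveformFragment = `
  <voice gender="female" languages="en-US">This part requests a specific kind of voice.</voice>
  <audio src="https://example.com/prompt.wav">
    Spoken only if the recording cannot be fetched or played.
  </audio>`;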
The purpose of this issue is to request that tests for passing an SSML document to the .text property of a single SpeechSynthesisUtterance instance be included in the Web Platform Tests suite, to either prove or disprove that processing a complete SSML document set as the value of a single SpeechSynthesisUtterance.text property can be implemented.
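A minimal sketch of what such a test might look like, assuming testharness.js is loaded (it can only assert that an SSML-valued utterance is rendered at all, since script currently has no way to observe whether the markup was honoured):

// Speculative Web Platform Test sketch; not an existing test.
async_test(t => {
  const ssml = `<?xml version="1.0"?>
    <speak version="1.1" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
      hello world
    </speak>`;
  const utterance = new SpeechSynthesisUtterance(ssml);
  utterance.onerror = t.unreached_func("speak() should not fail for SSML input");
  utterance.onend = t.step_func_done(); // passes if the utterance finishes at all
  speechSynthesis.speak(utterance);
}, "SpeechSynthesisUtterance.text accepts a complete SSML document");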
Given the following example test code
class SpeechSynthesisSSMLParser {
  constructor(ssml = `<?xml version="1.0"?>
    <speak version="1.1"
           xmlns="http://www.w3.org/2001/10/synthesis"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.w3.org/2001/10/synthesis http://www.w3.org/TR/speech-synthesis11/synthesis.xsd"
           xml:lang="en-US">hello world
    </speak>`, utterance = new SpeechSynthesisUtterance()) {
    // Accept either an SSML string or an already parsed XML document.
    if (ssml && typeof ssml === "string") {
      ssml = new DOMParser().parseFromString(ssml, "application/xml");
    }
    if (ssml instanceof Document && ssml.documentElement.nodeName === "speak") {
      // Map the xml:lang attribute and text content of the root <speak>
      // element onto the corresponding utterance properties.
      utterance.lang = ssml.documentElement.attributes.getNamedItem("xml:lang").value;
      utterance.text = ssml.documentElement.textContent;
    } else {
      throw new TypeError("Root element of SSML document should be <speak>");
    }
    return utterance;
  }
}
const synth = window.speechSynthesis;
synth.speak(new SpeechSynthesisSSMLParser());
we can parse the .textContent of the root <speak> element, though I am not yet able to conceptualize how it would be possible to process, for example, <audio>, <voice>, <prosody> and <break> elements without making multiple calls to SpeechSynthesisUtterance() or repeatedly scheduling the .text property of the SpeechSynthesisUtterance instance to be set.
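For what it is worth, the only approach I can sketch at the moment (speculation on my part, not something the specification sanctions) is exactly that multiple-utterance pattern: walk the parsed document and queue one utterance per top-level child of <speak>, approximating a few SSML attributes with utterance properties:

// Speculative sketch: queue one utterance per top-level child of <speak>,
// approximating <prosody rate> and <voice languages> with utterance.rate and
// utterance.lang; <audio> and <break> have no utterance-level equivalent and
// are simply skipped here.
function speakSSMLDocument(ssmlDocument) {
  for (const node of ssmlDocument.documentElement.childNodes) {
    const text = node.textContent.trim();
    if (!text) continue;
    const utterance = new SpeechSynthesisUtterance(text);
    if (node.nodeType === Node.ELEMENT_NODE) {
      if (node.localName === "prosody" && node.hasAttribute("rate")) {
        // Only the named SSML rates are mapped in this sketch.
        utterance.rate = { slow: 0.6, medium: 1, fast: 1.5 }[node.getAttribute("rate")] || 1;
      }
      if (node.localName === "voice" && node.hasAttribute("languages")) {
        utterance.lang = node.getAttribute("languages");
      }
    }
    window.speechSynthesis.speak(utterance); // utterances are queued in document order
  }
}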