Commit

Merge branch 'main' into main
yrw-google authored Mar 4, 2025
2 parents 35dd807 + c0694cb commit 682fb24
Showing 1 changed file with 17 additions and 8 deletions.
25 changes: 17 additions & 8 deletions index.bs
@@ -16,6 +16,7 @@ Former Editor: Hans Wennborg, Google
Abstract: This specification defines a JavaScript API to enable web developers to incorporate speech recognition and synthesis into their web pages.
Abstract: It enables developers to use scripting to generate text-to-speech output and to use speech recognition as an input for forms, continuous dictation and control.
Abstract: The JavaScript API allows web pages to control activation and timing and to handle results and alternatives.
Markup Shorthands:css no, markdown yes, dfn yes
</pre>

<pre class=biblio>
@@ -99,7 +100,7 @@ This does not preclude adding support for this as a future API enhancement, and
User consent can include, for example:
<ul>
<li>User click on a visible speech input element which has an obvious graphical representation showing that it will start speech input.</li>
<li>Accepting a permission prompt shown as the result of a call to <a method for=SpeechRecognition>start()</a>.</li>
<li>Accepting a permission prompt shown as the result of a call to {{SpeechRecognition/start()}}.</li>
<li>Consent previously granted to always allow speech input for this web page.</li>
</ul>
</li>
@@ -316,17 +317,25 @@ See <a href="https://lists.w3.org/Archives/Public/public-speech-api/2012Sep/0072
<dl>
<dt><dfn method for=SpeechRecognition>start()</dfn> method</dt>
<dd>
1. Let <var>requestMicrophonePermission</var> to <code>true</code>.
1. Run the <a>start session algorithm</a> with <var>requestMicrophonePermission</var>.
Start the speech recognition process, directly from a microphone on the device.
When invoked, run the following steps:

1. Let |requestMicrophonePermission| be a boolean variable set to `true`.
1. Run the [=start session algorithm=] with |requestMicrophonePermission|.
</dd>

<dt><dfn method for=SpeechRecognition>start({{MediaStreamTrack}} audioTrack)</dfn> method</dt>
<dd>
1. Let <var>audioTrack</var> be the first argument.
1. If <var>audioTrack</var>'s {{MediaStreamTrack/kind}} attribute is NOT <code>"audio"</code>, throw an {{InvalidStateError}} and abort these steps.
1. If <var>audioTrack</var>'s {{MediaStreamTrack/readyState}} attribute is NOT <code>"live"</code>, throw an {{InvalidStateError}} and abort these steps.
1. Let <var>requestMicrophonePermission</var> be <code>false</code>.
1. Run the <a>start session algorithm</a> with <var>requestMicrophonePermission</var>.
Start the speech recognition process, using the provided {{MediaStreamTrack}}.
When invoked, run the following steps:

1. Let |audioTrack| be the first argument.
1. If |audioTrack|'s {{MediaStreamTrack/kind}} attribute is NOT `"audio"`,
throw an {{InvalidStateError}} and abort these steps.
1. If |audioTrack|'s {{MediaStreamTrack/readyState}} attribute is NOT
`"live"`, throw an {{InvalidStateError}} and abort these steps.
1. Let |requestMicrophonePermission| be `false`.
1. Run the [=start session algorithm=] with |requestMicrophonePermission|.
</dd>

<dt><dfn method for=SpeechRecognition>stop()</dfn> method</dt>
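The two `start()` overloads in this diff differ only in where the audio comes from and whether a microphone permission prompt is needed. A minimal sketch of the branching, with `startSession` standing in for the spec's "start session algorithm" (the function names here are illustrative, not part of the API):

```javascript
// start() with no arguments: audio comes from the device microphone,
// so the user agent must request microphone permission.
function startWithMicrophone(startSession) {
  startSession({ requestMicrophonePermission: true });
}

// start(audioTrack): the caller supplies a MediaStreamTrack, so no
// microphone prompt is needed, but the track must be validated first.
function startWithTrack(audioTrack, startSession) {
  // The track must carry audio, not video.
  if (audioTrack.kind !== "audio") {
    throw new DOMException('Track kind must be "audio"', "InvalidStateError");
  }
  // The track must still be live (not ended).
  if (audioTrack.readyState !== "live") {
    throw new DOMException("Track must be live", "InvalidStateError");
  }
  startSession({ requestMicrophonePermission: false });
}
```

In the browser, the second path would typically be fed by a track obtained from `getUserMedia()` or `getDisplayMedia()`, which is why no additional permission prompt is required.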
