Receive Live Captioning
This example goes over how you can use the Symbl Streaming API to do live captioning. It uses both Symbl's JavaScript SDK, which is meant to be run with Node.js, and native JavaScript, which can be run in the browser.

Connect
The first thing we do is connect to the WebSocket using the SDK. If you're using the SDK, you can use the onMessageResponse and onSpeechDetected handlers once the connection is established; otherwise you'll have to parse the response yourself in the WebSocket's onmessage callback. A sketch of each approach follows.
- Symbl SDK (Node.js)
- Native JavaScript
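
Below is a minimal sketch of the SDK (Node.js) approach, assuming the symbl-node and uuid packages and placeholder APP_ID / APP_SECRET credentials. Exact option names can vary between SDK versions, so treat it as a starting point rather than a drop-in implementation.

```js
// app.js — minimal sketch of connecting to the Streaming API with the symbl-node SDK.
// APP_ID and APP_SECRET are placeholders for your Symbl credentials.
const { sdk } = require('symbl-node');
const { v4: uuid } = require('uuid');

(async () => {
  try {
    // Authenticate the SDK with your credentials.
    await sdk.init({
      appId: 'APP_ID',
      appSecret: 'APP_SECRET',
      basePath: 'https://api.symbl.ai',
    });

    // Each streaming connection is identified by a unique id.
    const id = uuid();

    const connection = await sdk.startRealtimeRequest({
      id,
      insightTypes: ['action_item', 'question'],
      config: {
        meetingTitle: 'Live Captioning Test',
        confidenceThreshold: 0.7,
        languageCode: 'en-US',
        sampleRateHertz: 44100, // must match the audio you stream to the connection
      },
      speaker: {
        userId: 'user@example.com',
        name: 'Example User',
      },
      handlers: {
        // Fires as soon as speech is recognized — use this for live captions.
        onSpeechDetected: (data) => {
          if (data && data.punctuated) {
            console.log('Live caption:', data.punctuated.transcript);
          }
        },
        // Fires when a portion of the transcript has been finalized.
        onMessageResponse: (data) => {
          console.log('Finalized messages:', JSON.stringify(data, null, 2));
        },
      },
    });

    // The connection only produces transcription once raw audio is streamed to it,
    // e.g. connection.sendAudio(chunk) fed from a microphone stream.
    console.log('Connected to the Streaming API.');
  } catch (err) {
    console.error('Failed to connect:', err);
  }
})();
```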

Testing (Symbl SDK, Node.js)
Create a JavaScript file named app.js and copy the SDK code above into it. Replace the placeholder values with your own. Use npm to install the required libraries: npm install symbl-node uuid. Now run the script from the terminal with node app.js.
If successful, you should receive a response in the console.

Testing (Native JavaScript)
Open your browser's developer console and copy the code below into it, replacing the placeholder values with your own.
If successful, you should receive a response in the console.
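
The sketch below shows the native JavaScript approach: open the Streaming API WebSocket directly and parse responses in the onmessage callback. The endpoint, the start_request message, and the microphone-capture details are assumptions based on the Streaming API's realtime endpoint and may differ by API version; ACCESS_TOKEN is a placeholder for a valid token.

```js
// Minimal sketch: connect to the Streaming API WebSocket from the browser.
const accessToken = 'ACCESS_TOKEN';
const meetingId = btoa('user@example.com'); // any unique id for this stream
const endpoint = `wss://api.symbl.ai/v1/realtime/insights/${meetingId}?access_token=${accessToken}`;

const ws = new WebSocket(endpoint);

ws.onmessage = (event) => {
  const data = JSON.parse(event.data);
  // Interim recognition results — render these as live captions.
  if (data.type === 'message' && data.message && data.message.type === 'recognition_result') {
    console.log('Live caption:', data.message.punctuated.transcript);
  }
  // Finalized messages.
  if (data.type === 'message_response') {
    for (const message of data.messages) {
      console.log('Final:', message.payload.content);
    }
  }
};

ws.onerror = (err) => console.error('WebSocket error:', err);
ws.onclose = () => console.info('WebSocket closed');

ws.onopen = () => {
  // Capture the microphone, then start the request using the actual sample rate.
  navigator.mediaDevices.getUserMedia({ audio: true, video: false }).then((stream) => {
    const context = new AudioContext();
    const source = context.createMediaStreamSource(stream);
    const processor = context.createScriptProcessor(1024, 1, 1);

    ws.send(JSON.stringify({
      type: 'start_request',
      insightTypes: ['question', 'action_item'],
      config: {
        confidenceThreshold: 0.5,
        languageCode: 'en-US',
        sampleRateHertz: context.sampleRate, // must match the audio streamed below
      },
      speaker: { userId: 'user@example.com', name: 'Example User' },
    }));

    // Convert each audio buffer to 16-bit PCM and send it over the socket.
    processor.onaudioprocess = (e) => {
      const input = e.inputBuffer.getChannelData(0);
      const pcm16 = new Int16Array(input.length);
      for (let i = 0; i < input.length; i++) {
        pcm16[i] = Math.max(-1, Math.min(1, input[i])) * 32767;
      }
      if (ws.readyState === WebSocket.OPEN) ws.send(pcm16.buffer);
    };
    source.connect(processor);
    processor.connect(context.destination);
  });
};
```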

Handlers Reference
- handlers: This object contains the callback functions for the different events.
- onSpeechDetected: To retrieve real-time transcription results as soon as they are detected. You can use this callback to render live transcription specific to the speaker of this audio stream.

onSpeechDetected JSON Response Example
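The shape below is illustrative, with placeholder values; consult the API reference for the authoritative schema of the recognition_result message.

```json
{
  "type": "message",
  "message": {
    "type": "recognition_result",
    "isFinal": true,
    "payload": {
      "raw": {
        "alternatives": [
          {
            "transcript": "Hello world.",
            "confidence": 0.91
          }
        ]
      }
    },
    "punctuated": {
      "transcript": "Hello world."
    },
    "user": {
      "userId": "user@example.com",
      "name": "Example User",
      "id": "e0ab3f58-0000-0000-0000-000000000000"
    }
  }
}
```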
- onMessageResponse: This callback function contains the "finalized" transcription data for this speaker and, if used with multiple streams with other speakers, it also provides their messages. "Finalized" means the automatic speech recognition has finalized the state of this part of the transcription and declared it final; it will therefore be more accurate than onSpeechDetected.

onMessageResponse JSON Response Example
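With the SDK, the data passed to onMessageResponse is typically an array of finalized message objects. The example below is illustrative, with placeholder values; check the API reference for the exact fields.

```json
[
  {
    "id": "59c224c2-0000-0000-0000-000000000000",
    "payload": {
      "content": "Hello world.",
      "contentType": "text/plain"
    },
    "from": {
      "userId": "user@example.com",
      "name": "Example User"
    },
    "channel": {
      "id": "realtime-api"
    }
  }
]
```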