How to get real-time transcription and insights with the Streaming API
In this how-to, you will learn how to get started with Symbl's native Streaming API for real-time conversation analysis.
Create the WebSocket endpoint URL by appending a unique meeting identifier to the path and an accessToken query parameter:
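A minimal sketch of building that URL. The base path shown here is an assumption based on Symbl's documentation; check the current API reference for the exact endpoint format:

```javascript
// Build the Streaming API endpoint URL from a meeting identifier and a token.
// The base path ("v1/realtime/insights") and parameter name are assumptions;
// verify them against Symbl's API reference.
const buildEndpoint = (uniqueMeetingId, accessToken) =>
  `wss://api.symbl.ai/v1/realtime/insights/${uniqueMeetingId}?accessToken=${accessToken}`;
```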
Now that we have constructed the endpoint, let's create a new WebSocket!
Before connecting the WebSocket to the endpoint, we first want to subscribe to its event listeners so we don't miss any messages.
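A sketch of creating the socket and attaching listeners. In browsers, the connection begins as soon as the WebSocket is constructed, so handlers are attached immediately; the message payload shape logged here is an assumption:

```javascript
// Create the WebSocket and attach listeners right away so no early
// messages are missed. Symbl sends JSON payloads over this connection.
function createSymblSocket(endpoint) {
  const ws = new WebSocket(endpoint);

  ws.onmessage = (event) => {
    // Parse and inspect incoming transcripts/insights (shape is an assumption).
    const data = JSON.parse(event.data);
    console.log('message:', data);
  };
  ws.onerror = (err) => console.error('WebSocket error:', err);
  ws.onclose = () => console.log('Connection closed');

  return ws;
}
```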
Now we are ready to start the WebSocket connection.
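Starting the connection means sending a start_request message once the socket opens. The field names below follow Symbl's documented start_request shape, but treat the specific config values (insight types, sample rate) as assumptions to adjust for your audio source:

```javascript
// Build the start_request message that tells Symbl to begin processing.
// Config values below are illustrative assumptions, not required settings.
function buildStartRequest() {
  return {
    type: 'start_request',
    insightTypes: ['question', 'action_item'],
    config: {
      confidenceThreshold: 0.5,
      speechRecognition: {
        encoding: 'LINEAR16',    // raw 16-bit PCM audio
        sampleRateHertz: 44100,  // must match the audio we send later
      },
    },
  };
}

// Usage (inside the socket's open handler):
//   ws.onopen = () => ws.send(JSON.stringify(buildStartRequest()));
```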
To pass the audio stream to the Streaming API, we first need to grant access to peripherals.
We can do this with the Navigator API, by accessing mediaDevices and calling getUserMedia.
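A sketch of that permission request. getUserMedia returns a Promise that resolves with a MediaStream once the user grants microphone access:

```javascript
// Ask the browser for microphone access. The returned Promise resolves
// with a MediaStream we can later pipe into an AudioContext.
function requestMicrophone() {
  return navigator.mediaDevices.getUserMedia({
    audio: true,   // we only need the microphone
    video: false,
  });
}

// Usage:
//   requestMicrophone().then((stream) => { /* stream audio to Symbl */ });
```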
Now that we have access to the microphone, we can use the WebSocket to stream the audio data to Symbl's cloud so transcripts and insights can be analyzed in real time. We can do this by creating a new AudioContext, then using the microphone stream we retrieved from the Promise resolution above to create a new source and processor.
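The steps above can be sketched as follows. Note that createScriptProcessor is deprecated in favor of AudioWorklet but remains the simplest illustration; the buffer size and the Float32-to-LINEAR16 conversion are assumptions matching the 16-bit PCM encoding mentioned earlier:

```javascript
// Convert Web Audio's Float32 samples (-1.0..1.0) to 16-bit PCM (LINEAR16).
function floatTo16BitPCM(float32Array) {
  const int16 = new Int16Array(float32Array.length);
  for (let i = 0; i < float32Array.length; i++) {
    const s = Math.max(-1, Math.min(1, float32Array[i])); // clamp to valid range
    int16[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
  }
  return int16;
}

// Wire the microphone stream into an AudioContext and forward raw audio
// chunks over the WebSocket. Buffer size (1024) is an illustrative choice.
function streamAudio(stream, ws) {
  const context = new AudioContext();
  const source = context.createMediaStreamSource(stream);
  const processor = context.createScriptProcessor(1024, 1, 1);

  processor.onaudioprocess = (event) => {
    const samples = event.inputBuffer.getChannelData(0);
    if (ws.readyState === WebSocket.OPEN) {
      ws.send(floatTo16BitPCM(samples).buffer);
    }
  };

  source.connect(processor);
  processor.connect(context.destination);
}
```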