Streaming API

Get started with the Streaming API

The Symbl.ai Streaming API uses the WebSocket protocol to process audio and provide conversation intelligence in real time. Implement the Streaming API in your web app to provide your users with active support from Symbl.ai, including captions, sentiment analysis, and insights like discussed topics and questions, among other features.

Symbl.ai SDKs

The code samples provided for the Streaming API use the Symbl.ai JavaScript and Web SDKs. For details about the SDKs and additional code samples, see the JavaScript SDK and Web SDK documentation.

Streaming API reference

For example code and more details about the Streaming API, see the Streaming API reference.

Real-time interim results

Interim results are intermediate transcript predictions retrieved during a real-time conversation that are likely to change before the automatic speech recognition (ASR) engine returns its final results.

During a streaming conversation, messages of type recognition_result continuously add recognized text and data to the transcript.

You can also use the conversation ID to view real-time interim results of the transcript. During a conversation, use the Get messages operation to view interim transcript results. Use the same operation to view the final transcript after a conversation.
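
For example, the following is a minimal sketch of calling the Get messages operation with Node.js 18 or later (which provides a global fetch). The <ACCESS_TOKEN> and <CONVERSATION_ID> placeholders are assumptions; substitute an access token you generate during authentication and the conversation ID returned by your streaming connection.

// Minimal sketch: retrieve interim (or final) transcript messages for a conversation.
// <ACCESS_TOKEN> and <CONVERSATION_ID> are placeholders, not values from the samples below.
const accessToken = '<ACCESS_TOKEN>';
const conversationId = '<CONVERSATION_ID>';

const getMessages = async () => {
  const response = await fetch(
    `https://api.symbl.ai/v1/conversations/${conversationId}/messages`,
    { headers: { Authorization: `Bearer ${accessToken}` } }
  );
  const { messages } = await response.json();
  messages.forEach((message) => console.log(message.text));
};

getMessages().catch(console.error);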

Conversation intelligence

Once you have a conversation ID, you can view speaker analytics using Get analytics. You can call Get analytics during the conversation or afterward.

After the conversation is complete, use the conversation ID to view insights described in Automatic Speech Recognition.
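
For example, the following is a minimal sketch of calling the Get analytics operation, using the same placeholder conventions as the Get messages sketch above.

// Minimal sketch: retrieve speaker analytics such as talk time and silence.
// <ACCESS_TOKEN> and <CONVERSATION_ID> are placeholders.
const accessToken = '<ACCESS_TOKEN>';
const conversationId = '<CONVERSATION_ID>';

const getAnalytics = async () => {
  const response = await fetch(
    `https://api.symbl.ai/v1/conversations/${conversationId}/analytics`,
    { headers: { Authorization: `Bearer ${accessToken}` } }
  );
  console.log(await response.json());
};

getAnalytics().catch(console.error);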

Authentication

For testing purposes only, the following code samples ask you to include your App ID and App Secret, the credentials used to generate Symbl.ai API access tokens. For a production environment, Symbl.ai recommends that you do not store your Symbl.ai API credentials in plain text or in your source code.

For more information about your App ID and App Secret, see the Get Started Guide.
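
For reference, the following is a minimal sketch of exchanging an App ID and App Secret for an access token with the Symbl.ai authentication endpoint. It assumes Node.js 18 or later and reads the credentials from environment variables rather than hard-coding them.

// Minimal sketch: generate a Symbl.ai access token from an App ID and App Secret.
// Reads credentials from environment variables instead of storing them in source code.
const generateToken = async () => {
  const response = await fetch('https://api.symbl.ai/oauth2/token:generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      type: 'application',
      appId: process.env.SYMBL_APP_ID,
      appSecret: process.env.SYMBL_APP_SECRET,
    }),
  });
  const { accessToken } = await response.json();
  return accessToken;
};

generateToken().then((token) => console.log(token)).catch(console.error);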


Stream with the Web SDK

This section describes how to use the Web SDK to open a connection, process an audio stream, and close a connection.

Before you begin

Before you try the code sample, complete the following steps to set up your project.

  1. Create a new directory for your Web SDK project.

  2. In the new directory, create a file named index.html.

Try the Streaming API

To try the code sample, paste the following code into your index.html file and replace the placeholder values.

This code sample authorizes you with the Streaming API and starts capturing audio from your mic. The sample also returns a conversation ID, which can be used to generate conversation intelligence.

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Symbl Web SDK example</title>
  <script src="https://sdk.symbl.ai/js/beta/symbl-web-sdk/latest/symbl.min.js"></script>
  <script>
      // START changing values here.
      
      const appId = '<APP_ID>';
      const appSecret = '<APP_SECRET>';

      // STOP changing values here.

      const start = async () => {

      try {

          // Symbl recommends replacing the App ID and App Secret with an Access Token for authentication in production applications.
          // For more information about authentication see https://docs.symbl.ai/docs/developer-tools/authentication/.
          const symbl = new Symbl({
              appId: appId,
              appSecret: appSecret,
              // accessToken: '<your Access Token>' // for production use
          });
          
          // Open a Streaming API WebSocket Connection and start processing audio from your input device.
          const connection = await symbl.createAndStartNewConnection();

          // Retrieve the conversation ID for the conversation.
          connection.on("conversation_created", (conversationData) => {
            const conversationId = conversationData.data.conversationId;
            console.log(`${conversationId}`);
            document.querySelector("#conversationId").innerHTML = `${conversationId}`;
            document.querySelector("#startButton").setAttribute("disabled", "");
          });

          // Retrieve real-time transcription from the conversation.
          connection.on("speech_recognition", (speechData) => {
            const name = speechData.user ? speechData.user.name : "User";
            const transcript = speechData.punctuated.transcript;
            console.log(`${name}: `, transcript);
            document.querySelector("#speechRecognition").innerHTML = `${name}: ${transcript}`;
          });
          
          // This is a helper method for testing purposes.
          // It waits 60 seconds before continuing to the next API call.
          await Symbl.wait(60000);
          
          // Stops processing audio, but keeps the WebSocket connection open.
          await connection.stopProcessing();
          
          // Closes the WebSocket connection.
          connection.disconnect();

          document.querySelector("#startButton").removeAttribute("disabled");
      } catch(e) {
          // Handle errors here.
      }
    }
  </script>
</head>

<body>

  <button id="startButton" onclick="start()">Start Processing</button>

  <p><b>Conversation ID:</b> <span id="conversationId">None</span></p>

  <p id="speechRecognition">Click <b>Start Processing</b> and begin speaking to see transcription. If prompted, allow access to your microphone. The page records for 60 seconds and then closes the connection.<br> <br> If nothing happens, check your <a href="https://platform.symbl.ai/#/home">Symbl.ai App ID and App Secret</a> in this HTML file on lines 10 and 11 respectively.</p>

</body>

</html>

Where:

  • <APP_ID> and <APP_SECRET> are your App ID and Secret from the Symbl.ai Platform.

To test the sample, open index.html in your web browser.


Stream with the JavaScript SDK

This section describes how to use the JavaScript SDK to open a connection, process an audio stream, and close a connection. The JavaScript SDK is designed to be used with Node.js.

Before you begin

Before you try the code samples, prepare your developer environment by completing the following steps.

  1. Install Node.js.

  2. To capture audio input from your microphone, install SoX (Windows/Mac) or ALSA Tools (Linux).

    • Download and install SoX (Windows/Mac).

    • To install ALSA Tools on Debian or Ubuntu:

      sudo apt-get update && sudo apt-get install alsa-base alsa-utils
      
    • To install ALSA Tools on Arch:

      sudo pacman -Syu && sudo pacman -S alsa-tools
      
  3. Create a new directory, such as streaming-test or any name you choose.

  4. In the new directory, create a file named stream.js to be completed later in this guide.

  5. Also create a file named subscribe.js to be completed later in this guide.

  6. Initialize a Node.js project:

    npm init -y
    

    You should receive a success message similar to:

    {
      "name": "streaming-test",
      "version": "1.0.0",
      "description": "",
      "main": "stream.js",
      "scripts": {
        "test": "echo \"Error: no test specified\" && exit 1"
      },
      "keywords": [],
      "author": "",
      "license": "ISC"
    }
    
  7. Install the mic package:

    npm install mic
    
  8. Install the Symbl.ai JavaScript SDK:

    npm install @symblai/symbl-js
    

Try the Streaming API

To try the code sample, paste the following code into your stream.js file and replace the placeholder values.

This code sample authorizes you with the Streaming API, starts capturing audio from your mic, and provides conversation intelligence insights based on what you speak. The sample also returns a conversation ID, which can be used to generate conversation intelligence.

const { sdk } = require('@symblai/symbl-js');
const uuid = require('uuid').v4;

const mic = require('mic')
const sampleRateHertz = 16000

// START changing values here.

const appId = '<APP_ID>';
const appSecret = '<APP_SECRET>';

const config = {
  meetingTitle: '<TITLE>',
  sampleRateHertz: sampleRateHertz
}

const speaker = {
  name: '<NAME>',
  userId: '<IDENTIFIER>',
  email: '<EMAIL>'
}

// STOP changing values here.

const insightTypes = [
  'question',
  'action_item',
  'follow_up',
  'topic'
]

const micInstance = mic({
  rate: sampleRateHertz,
  channels: '1',
  debug: false,
  exitOnSilence: 6,
});

// A unique connection ID is required. In production, generate one with uuid().
// const connectionId = uuid()
const connectionId = Buffer.from(appId).toString('base64'); // for testing

(async () => {
  try {
    // Initialize the SDK
    await sdk.init({
      appId: appId,
      appSecret: appSecret,
      basePath: 'https://api.symbl.ai',
    })

    // Start Real-time Request (Uses Real-time WebSocket API behind the scenes)
    const connection = await sdk.startRealtimeRequest({
      id: connectionId,
      speaker: speaker,
      insightTypes: insightTypes,
      config: config,
      handlers: {
        /**
         * This will return live speech-to-text transcription of the call.
         * There are other handlers that can be seen in the full example.
         */
        onSpeechDetected: (data) => {
          if (data) {
            const {
              punctuated
            } = data
            console.log('Live: ', punctuated && punctuated.transcript)
          }
        }
      }
    });

    // Logs conversationId which is used to access the conversation afterwards
    console.log('Successfully connected. Conversation ID: ', connection.conversationId);

    const micInputStream = micInstance.getAudioStream()
    /** Raw audio stream */
    micInputStream.on('data', (data) => {
      // Push audio from Microphone to websocket connection
      connection.sendAudio(data)
    })

    micInputStream.on('error', function (err) {
      console.log('Error in Input Stream: ' + err)
    })

    micInputStream.on('startComplete', function () {
      console.log('Started listening to Microphone.')
    })

    micInputStream.on('silence', function () {
      console.log('Got SIGNAL silence')
    })

    micInstance.start()

    setTimeout(async () => {
      // Stop listening to microphone
      micInstance.stop()
      console.log('Stopped listening to Microphone.')
      try {
        // Stop connection
        await connection.stop()
        console.log('Connection Stopped.')
      } catch (e) {
        console.error('Error while stopping the connection.', e)
      }
    }, 120 * 1000) // Stop the connection after 2 minutes (120 seconds)
  } catch (err) {
    console.error('Error: ', err)
  }
})();

Where:

  • <APP_ID> and <APP_SECRET> are your App ID and Secret from the Symbl.ai Platform.
  • <TITLE> is the name of the meeting or conversation. If no name is provided, the Streaming API sets the name to the conversation ID.
  • <NAME> is the name of the person speaking.
  • <IDENTIFIER> is a unique identifier for the speaker, such as an email address or user ID.
  • <EMAIL> is the email address of the person speaking. If provided, when the audio stream ends, the Streaming API sends an email with conversation intelligence to the given address.

To run the code sample:

  1. On the command line, go to your Node.js project directory.

  2. Run the code sample.

    node stream.js
    

    The first time you run the code sample, you might be prompted to allow access to your microphone. To try the sample, permit access to your device's mic.

Try subscribing to an audio stream

To open a read-only connection to an existing audio stream without initializing a mic or audio device, use the Subscribe feature of the Streaming API. To subscribe to an existing audio stream, pass the connectionId of the target stream to the subscribe call. The code sample returns a conversation ID, which can be used to generate conversation intelligence.

To try the code sample, paste the following code into your subscribe.js file and replace the placeholder values.

This code sample authorizes you with the Streaming API and connects to a running instance of the previous code sample.

const { sdk } = require('@symblai/symbl-js');

// START changing values here.

const appId = '<APP_ID>';
const appSecret = '<APP_SECRET>';

// STOP changing values here.

// Subscribe to the connection using the same connectionId defined in the previous example.
// For testing, this uses the same constant Base64 string as before.
const connectionId = Buffer.from(appId).toString('base64'); // for testing

(async () => {
  try {
    // Initialize the SDK
    await sdk.init({
      appId: appId,
      appSecret: appSecret,
      basePath: 'https://api.symbl.ai',
    })

    sdk.subscribeToStream(connectionId, (data) => {
        const { type } = data;
        if (type === 'message_response') {

            const { messages } = data;

            // You get any messages here
            messages.forEach(message => {
              console.log(`Message: ${message.payload.content}`)
            });

        } else if (type === 'insight_response') {

            const { insights } = data;

            // You get any insights here
            insights.forEach(insight => {
                console.log(`Insight: ${insight.type} - ${insight.text}`);
            });

        } else if (type === 'topic_response') {
            const { topics } = data;
            
            // You get any topic phrases here
            topics.forEach(topic => {
                console.log(`Topic detected: ${topic.phrases}`)
            });

        } else if (type === 'message' && data.message.hasOwnProperty('punctuated')) {

            const { transcript } = data.message.punctuated;

            // Live punctuated full transcript as opposed to broken into messages
            console.log(`Live transcript: ${transcript}`)
        } else if (type === 'message' && data.message.type === 'subscription_started') {

            console.log('Subscription started event received:', data);

            // You get the conversationId here
            const conversationId = data.message.data.conversationId;
            console.log(`Subscription started for Conversation ID: ${conversationId}`);
        }

        // The raw data response
        console.log(`Response type: ${data.type}. Object: `, data);

    });
  } catch (err) {
    console.error('Error: ', err)
  }
})();

Where:

  • <APP_ID> and <APP_SECRET> are your App ID and Secret from the Symbl.ai Platform.

To run the code sample:

  1. If it is not already running, run the previous code sample.

  2. On a new instance of the command line, go to your Node.js project directory.

  3. Run the code sample.

    node subscribe.js