Getting live transcripts and conversation intelligence

This tutorial provides step-by-step instructions on how to receive live transcripts and conversation intelligence such as action items, topics, follow-ups, questions, and trackers using the Web SDK.

Prerequisites

Following are the prerequisites for using the Web SDK:

  • App ID and App Secret: Ensure that you have your API credentials, the App ID and App Secret, handy. You can get them from the Symbl Platform. Alternatively, you can use your access token for authentication; see the Authentication page to learn more.
  • npm package manager: Install the latest version of the npm package manager (version 6.0.0 or later).

See the list of web browsers supported in the Browsers Supported section.

Install the Web SDK

Using npm

Install the Web SDK using npm with the following command:

npm i @symblai/symbl-web-sdk

🚧

You must have the latest version of npm installed. If you don’t, run the following command to update it:

npm install -g npm@latest
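
If your project uses Yarn instead of npm, the equivalent command is:

yarn add @symblai/symbl-web-sdk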

Import and initialize

You can import the Web SDK using ES5, ES6, or browser syntax with the following code:

const {Symbl} = require('@symblai/symbl-web-sdk'); // ES5
import {Symbl} from '@symblai/symbl-web-sdk'; // ES6
const {Symbl} = window; // Browser

To initialize the Web SDK, you can pass in an access token generated using Symbl’s Authentication mechanism. Alternatively, you can use the App ID and App Secret from the Symbl Platform. The App ID and App Secret are not meant for production usage, because they are meant to be kept secret.

const symbl = new Symbl({
    appId: '<your App ID>',
    appSecret: '<your App Secret>',
    // accessToken: '<your Access Token>', // Can be used instead of appId and appSecret
    // basePath: '<your custom base path>', // Optional
    // logLevel: 'debug', // Sets which log level you want to view
    // reconnectOnError: false // If true, attempts to reconnect after a disconnect caused by an error
});
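
For production, generate the access token on your server and pass only the token to the browser. The sketch below assumes Symbl’s standard token endpoint (POST https://api.symbl.ai/oauth2/token:generate); see the Authentication page for the authoritative flow.

// A minimal sketch of token-based initialization, assuming the standard
// Symbl token endpoint. In production, make this request from your server
// so the App Secret never ships to the browser.
const response = await fetch('https://api.symbl.ai/oauth2/token:generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
        type: 'application',
        appId: '<your App ID>',
        appSecret: '<your App Secret>'
    })
});
const { accessToken } = await response.json();

// Initialize the SDK with the token instead of the raw credentials.
const symbl = new Symbl({ accessToken });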

Create connection and start processing audio

The code below shows the configuration as well as the Streaming API functions that enable you to start a live connection and receive conversation intelligence:

// Open a Symbl Streaming API WebSocket Connection.
const connection = await symbl.createConnection();

// Start processing audio from your default input device.
await connection.startProcessing({
  insightTypes: ["question", "action_item", "follow_up"],
  config: {
    encoding: "OPUS" // Encoding can be "LINEAR16" or "OPUS"
  },
  speaker: {
    userId: "user@example.com",
    name: "Your Name Here"
  }
});

For more information on the configuration parameters, see Configuration Reference.
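
For example, if you capture raw PCM audio rather than Opus, the options might look like the sketch below. The sampleRateHertz and trackers parameters follow the Streaming API request parameters, but treat the specific values here as illustrative assumptions.

// A sketch of a fuller configuration, assuming LINEAR16 input and one custom tracker.
await connection.startProcessing({
  insightTypes: ["question", "action_item", "follow_up"],
  config: {
    encoding: "LINEAR16",   // Raw PCM audio
    sampleRateHertz: 44100  // Must match the sample rate of your audio source
  },
  trackers: [
    {
      name: "Budget",                           // Illustrative tracker name
      vocabulary: ["budget", "pricing", "cost"] // Phrases to match in the conversation
    }
  ],
  speaker: {
    userId: "user@example.com",
    name: "Your Name Here"
  }
});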

Get live transcripts and conversation intelligence

The connection.on function subscribes to events that deliver live transcripts and conversation intelligence such as topics, action items, follow-ups, questions, and trackers.

// Retrieve real-time transcription from the conversation
connection.on("speech_recognition", (speechData) => {
  const { punctuated } = speechData;
  const name = speechData.user ? speechData.user.name : "User";
  console.log(`${name}: `, punctuated.transcript);
});

// Retrieve the topics of the conversation in real-time.
connection.on("topic", (topicData) => {
  topicData.forEach((topic) => {
    console.log("Topic: " + topic.phrases);
  });
});

// Retrieve questions from the conversation in real-time.
connection.on("question", (questionData) => {
  console.log("Question Found: ", questionData["payload"]["content"]);
});

You can get the following conversation intelligence in real-time with the Web SDK:

  • Get Transcripts

    You can get live transcripts of the audio using this callback.

  • Get Finalized Transcripts

    You can get the "finalized" transcription data.

  • Get Topics

    Topics provide a quick overview of the key things that were talked about in the conversation.

  • Get Action Items

    An action item is a specific outcome recognized in the conversation that requires one or more people in the conversation to take a specific action, e.g. set up a meeting, share a file, complete a task, etc.

  • Get Follow-ups

    This is a category of action items with a connotation to follow-up a request or a task like sending an email or making a phone call or booking an appointment or setting up a meeting.

  • Get Questions

    Any explicit question or request for information that comes up during the conversation.

  • Get Trackers

    Trackers allow you to identify messages that contain specific phrases or sentences.

To learn more about the insight callbacks, see Events and callbacks. Handlers for the remaining events are sketched below.
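
As a sketch, handlers for the remaining insight types might look like the following. The event names (message, action_item, follow_up, tracker) and the payload fields are assumptions based on the pattern above; confirm them against the Events and callbacks reference.

// Retrieve the finalized transcription messages.
connection.on("message", (messageData) => {
  messageData.forEach((message) => {
    console.log("Message: ", message.payload.content);
  });
});

// Retrieve action items from the conversation in real-time.
connection.on("action_item", (actionItemData) => {
  console.log("Action Item Found: ", actionItemData.payload.content);
});

// Retrieve follow-ups from the conversation in real-time.
connection.on("follow_up", (followUpData) => {
  console.log("Follow-up Found: ", followUpData.payload.content);
});

// Retrieve tracker matches from the conversation in real-time.
connection.on("tracker", (trackerData) => {
  console.log("Tracker Found: ", trackerData.name);
});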

Wait 60 seconds before disconnecting

The Symbl.wait method is a helper that waits for the number of milliseconds you provide before moving on to the next line.

Symbl.wait is only meant for testing and should not be used in a production environment.

// This is just a helper method meant for testing purposes.
// Waits 60 seconds before continuing to the next API call.
await Symbl.wait(60000);
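
In a real application you would typically wait on a user action instead of a timer. A minimal sketch, assuming a hypothetical button with the ID stop-button in your page:

// A production-leaning alternative sketch: resolve when the user clicks a
// "Stop" button instead of waiting on a fixed timer. The "stop-button"
// element ID is illustrative; wire this to your own UI.
await new Promise((resolve) => {
  document.getElementById("stop-button")
    .addEventListener("click", resolve, { once: true });
});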

Stop processing and close connection

The code below stops audio processing from your device and closes the Streaming API WebSocket connection:

// Stops processing audio, but keeps the WebSocket connection open.
await connection.stopProcessing();

// Closes the WebSocket connection.
connection.disconnect();
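
It can also help to close the connection if the user leaves the page mid-conversation. A minimal sketch using the browser’s beforeunload event:

// A sketch of cleanup when the user closes the page mid-conversation.
window.addEventListener("beforeunload", () => {
  connection.disconnect(); // Close the WebSocket before the page unloads.
});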

Full code sample

To run everything, wrap all of this code in an async function.

🚧

See the Import and initialize section for the various ways to import the Web SDK.

const {Symbl} = window;
// const {Symbl} = require('@symblai/symbl-web-sdk'); // ES5
// import {Symbl} from '@symblai/symbl-web-sdk'; // ES6

(async () => {

  try {

      // We recommend removing appId and appSecret authentication for production applications.
      // See the Authentication section for more details.
      const symbl = new Symbl({
          appId: '<your App ID>',
          appSecret: '<your App Secret>',
          // accessToken: '<your Access Token>'
      });
      
      // Open a Symbl Streaming API WebSocket Connection.
      const connection = await symbl.createConnection();
      
      // Start processing audio from your default input device.
      await connection.startProcessing({
        insightTypes: ["question", "action_item", "follow_up"],
        config: {
          encoding: "OPUS" // Encoding can be "LINEAR16" or "OPUS"
        },
        speaker: {
          userId: "user@example.com",
          name: "Your Name Here"
        }
      });

      // Retrieve real-time transcription from the conversation
      connection.on("speech_recognition", (speechData) => {
        const { punctuated } = speechData;
        const name = speechData.user ? speechData.user.name : "User";
        console.log(`${name}: `, punctuated.transcript);
      });

      // Retrieve the topics of the conversation in real-time.
      connection.on("topic", (topicData) => {
        topicData.forEach((topic) => {
          console.log("Topic: " + topic.phrases);
        });
      });

      // Retrieve questions from the conversation in real-time.
      connection.on("question", (questionData) => {
        console.log("Question Found: ", questionData["payload"]["content"]);
      });
      
      // This is just a helper method meant for testing purposes.
      // Waits 60 seconds before continuing to the next API call.
      await Symbl.wait(60000);
      
      // Stops processing audio, but keeps the WebSocket connection open.
      await connection.stopProcessing();
      
      // Closes the WebSocket connection.
      connection.disconnect();
  } catch(e) {
      // Handle errors here.
  }

})();

For more information, read the Request Parameters section of the Streaming API.