Live sentiment analysis

This guide describes how to get started with the Symbl.ai Streaming API for conversation analysis. The Streaming API enables real-time conversational analysis for voice, video, chat, or any live streaming directly through your web browser.

If you have voice, video, or chat enabled, you can use the Streaming API to tap the raw conversational data of those streams. This guide also describes how to set up a function that logs sentiment analysis in real time. You get sentiment analysis by calling the Get Messages operation with the sentiment=true query parameter.
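For example, the request takes the following shape. This is a preview sketch using the browser's built-in fetch API; the conversationId and accessToken values are obtained later in this guide.

// Preview sketch of the Get Messages call with sentiment enabled. Assumes
// `conversationId` and `accessToken` are available, as shown later in this guide.
const response = await fetch(
  `https://api.symbl.ai/v1/conversations/${conversationId}/messages?sentiment=true`,
  { headers: { 'Authorization': `Bearer ${accessToken}` } }
);
console.log(await response.json());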

📘

Note

The code sample runs entirely in the browser without Node.js, but requires you to understand HTTP requests.

Getting Started

First, create the endpoint for the WebSocket connection. The WebSocket endpoint has two parts:

  1. A unique connection ID. This example uses a universally unique identifier (UUID) to automatically generate a unique connection ID. This ensures that your connectionId does not conflict with any other client connecting to the same namespace.
  2. A GET parameter named access_token. This is the access token generated during Authentication.

Check the example below:

const accessToken = '<ACCESS_TOKEN>'; // Replace with your access token.
// Refer to the Authentication section for how to generate the accessToken: https://docs.symbl.ai/docs/authenticate

// Generate a unique connection ID. crypto.randomUUID() is built into modern
// browsers; in Node.js you can use require('uuid').v4 instead.
const connectionId = crypto.randomUUID();
const symblEndpoint = `wss://api.symbl.ai/v1/streaming/${connectionId}?access_token=${accessToken}`;

// Create an AudioContext instance to read the sample rate and handle audio processing.
const context = new AudioContext();

Endpoint

The streaming endpoint is wss://api.symbl.ai/v1/streaming/. If you have previously used any other endpoint, make sure you use this endpoint now.

Create the WebSocket

Now that you have constructed the endpoint, let's create a new WebSocket.

📘

You can use the browser's built-in JavaScript WebSocket API. For more information, see: https://developer.mozilla.org/en-US/docs/Web/API/WebSocket

const ws = new WebSocket(symblEndpoint);

Make sure you register the WebSocket's event listeners so that you don't miss any messages. Use the following example code to set up the listeners before the WebSocket connects to the endpoint.

Set WebSocket listeners

// Fired when a message is received from the WebSocket server
ws.onmessage = (event) => {
  // You can find the conversationId in data.message.data.conversationId
  const data = JSON.parse(event.data);
  if (data.type === 'message' && data.message.hasOwnProperty('data')) {
    console.log('conversationId', data.message.data.conversationId);
  }
  if (data.type === 'message_response') {
    for (let message of data.messages) {
      console.log('Transcript (more accurate): ', message.payload.content);
    }
  }
  if (data.type === 'topic_response') {
    for (let topic of data.topics) {
      console.log('Topic detected: ', topic.phrases)
    }
  }
  if (data.type === 'insight_response') {
    for (let insight of data.insights) {
      console.log('Insight detected: ', insight.payload.content);
    }
  }
  if (data.type === 'message' && data.message.hasOwnProperty('punctuated')) {
    console.log('Live transcript (less accurate): ', data.message.punctuated.transcript)
  }
  console.log(`Response type: ${data.type}. Object: `, data);
};

// Fired when the WebSocket closes unexpectedly due to an error or lost connection
ws.onerror = (err) => {
  console.error(err);
};

// Fired when the WebSocket connection has been closed
ws.onclose = (event) => {
  console.info('Connection to websocket closed');
};

Start the WebSocket connection

Once the connection is open, send the following message over the WebSocket to start the session with the Streaming API.

Include the "sentiment": true flag to generate sentiment analysis.

// Fired when the connection succeeds.
ws.onopen = (event) => {
  ws.send(JSON.stringify({
    "type": "start_request",
    "meetingTitle": "Websockets How-to", // Conversation name
    "insightTypes": ["question", "action_item"], // Enable insights
    "trackers": [
      {
        "name": "content",
        "vocabulary": [
          "finding talent",
          "life",
          "purpose",
          "how we will"
        ]
      }
    ],
    "config": {
      "sentiment": true,
      "confidenceThreshold": 0.5,
      "languageCode": "en-US",
      "speechRecognition": {
        "encoding": "LINEAR16",
        "sampleRateHertz": 44100
      }
    },
    "speaker": {
      "userId": "[email protected]",
      "name": "Example Sample"
    }
  }));
};

Audio encoding options

Check out our guide on the Best Practices for Audio Integrations with Symbl.ai to learn more about audio encoding options.
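One common pitfall is a sampleRateHertz value that doesn't match the audio you actually capture; device sample rates vary (44100 and 48000 Hz are both common). Rather than hard-coding the value, you can read it from the AudioContext created earlier. A minimal sketch, assuming the context variable from the first code block:

// A sketch: read the device's actual sample rate from the AudioContext so
// the start_request config matches the audio you stream.
const sampleRateHertz = context.sampleRate; // e.g. 44100 or 48000, device-dependent

// ...then use it in the start_request config:
// "speechRecognition": { "encoding": "LINEAR16", "sampleRateHertz": sampleRateHertz }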

Create the Audio Stream

Once you have the connection to the Streaming API set up, you need to create an audio stream.
You can do this using the Navigator API by accessing mediaDevices and calling getUserMedia.

This enables you to grant the browser access to your computer's microphone.

const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: false });

Because this example processes only audio data, you don't need to request video device access.
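Note that getUserMedia returns a Promise that rejects if the user denies microphone access or no input device is available, so you may want to guard the call:

// getUserMedia rejects when the user denies access or no microphone is
// available, so handle that case before streaming.
let stream;
try {
  stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: false });
} catch (err) {
  console.error('Microphone access was denied or unavailable:', err);
}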

Handle the audio stream

Now that you have granted access to the microphone, you can use the WebSocket to handle the data stream and generate transcripts and insights in real time.

To handle the audio stream, you need to create a new AudioContext. Then use the microphone stream you retrieved from the Promise resolution above to create a new source and processor.

/**
 * The callback function which fires after a user gives the browser permission to use
 * the computer's microphone. Starts a recording session which sends the audio stream to
 * the WebSocket endpoint for processing.
 */
const handleSuccess = (stream) => {
  const AudioContext = window.AudioContext;
  const context = new AudioContext();
  const source = context.createMediaStreamSource(stream);
  // ScriptProcessorNode is deprecated in favor of AudioWorklet, but it keeps
  // this example short.
  const processor = context.createScriptProcessor(1024, 1, 1);
  const gainNode = context.createGain();
  source.connect(gainNode);
  gainNode.connect(processor);
  processor.connect(context.destination);
  processor.onaudioprocess = (e) => {
    // Convert the 32-bit float samples to a 16-bit (LINEAR16) payload.
    const inputData = e.inputBuffer.getChannelData(0);
    const targetBuffer = new Int16Array(inputData.length);
    for (let index = 0; index < inputData.length; index++) {
      // Clamp each sample to [-1, 1] before scaling to the 16-bit range.
      targetBuffer[index] = 32767 * Math.max(-1, Math.min(1, inputData[index]));
    }
    // Send the audio chunk to the WebSocket.
    if (ws.readyState === WebSocket.OPEN) {
      ws.send(targetBuffer.buffer);
    }
  };
};


handleSuccess(stream);

Stop the WebSocket connection

To stop the WebSocket connection when you're done, run this code in your web browser:

// Stops the WebSocket connection.
ws.send(JSON.stringify({
  "type": "stop_request"
}));

Test

To make sure the code is working, open your browser's developer console and paste the code directly into it. Your browser displays a popup asking for microphone permissions. If you grant permission, the application starts recording. Speak into the microphone to see the results logged to the console.

Get the Conversation ID

You need the Conversation ID to generate Conversation Intelligence using the Conversations API. This guide is limited to streaming examples. Use the Conversations API to generate conversation intelligence after the conversation.

Review the onmessage handler to see how you can get the Conversation ID:

// Fired when a message is received from the WebSocket server
ws.onmessage = (event) => {
  // You can find the conversationId in data.message.data.conversationId
  const data = JSON.parse(event.data);
  if (data.type === 'message' && data.message.hasOwnProperty('data')) {
    console.log('conversationId', data.message.data.conversationId);
  }
  if (data.type === 'message_response') {
    for (let message of data.messages) {
      console.log('Transcript (more accurate): ', message.payload.content);
    }
  }
  if (data.type === 'topic_response') {
    for (let topic of data.topics) {
      console.log('Topic detected: ', topic.phrases)
    }
  }
  if (data.type === 'insight_response') {
    for (let insight of data.insights) {
      console.log('Insight detected: ', insight.payload.content);
    }
  }
  if (data.type === 'message' && data.message.hasOwnProperty('punctuated')) {
    console.log('Live transcript (less accurate): ', data.message.punctuated.transcript)
  }
  console.log(`Response type: ${data.type}. Object: `, data);
};

With the Conversation ID, you can generate conversation intelligence, including:

  • Trackers -- Use trackers to automatically recognize phrases and their meaning in conversations. An individual tracker is a group of phrases identifying a characteristic or an event you want to track in conversations.
  • Conversation topics -- Summary topics provide a quick overview of the key things that were talked about in the conversation.
  • Action items -- An action item is a specific outcome recognized in the conversation that requires one or more people in the conversation to take a specific action, such as setting up a meeting, sharing a file, or completing a task.
  • Follow-ups -- A request or a task, such as sending an email, making a phone call, booking an appointment, or setting up a meeting.
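
For example, once the conversation has ended, you could fetch the summary topics over the Conversations API. The following is a minimal sketch, assuming the conversationId captured in the onmessage handler and the accessToken from earlier:

// A sketch of a post-conversation Conversations API request that retrieves
// summary topics. Assumes `conversationId` and `accessToken` are available.
const topicsRequest = new XMLHttpRequest();
topicsRequest.responseType = "text";
topicsRequest.open("GET", `https://api.symbl.ai/v1/conversations/${conversationId}/topics`);
topicsRequest.setRequestHeader('Authorization', `Bearer ${accessToken}`);
topicsRequest.onreadystatechange = () => {
  if (topicsRequest.readyState === XMLHttpRequest.DONE) {
    console.log('Topics:', topicsRequest.responseText);
  }
};
topicsRequest.send();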

You can also log sentiments in real-time, as described in the next section.

Logging Sentiments

The first step in creating a function for logging sentiments is to make sure that you log the conversationId. Earlier, you logged the conversationId with console.log('conversationId', data.message.data.conversationId).

The ws.onmessage handler fires for each event received from the server, and each event carries data. By parsing the event data, you can access the conversationId: data.message.data.conversationId

After parsing the event data, the next step is to cache the conversationId by saving it as a constant in JavaScript.

const conversationId = data.message.data.conversationId;

With the conversationId cached, you can set up a GET request to the Conversations API Get Messages operation with the sentiment=true query parameter, using the cached conversationId.

To create the GET request:

  1. Use an HTTP client. This example uses the browser's built-in XMLHttpRequest, but you are free to use any HTTP library you like.
  2. Set up the request: indicate the response type and the GET method, and configure the headers.
  3. Log the response in the onreadystatechange callback.
  4. Send the request with a call to .send().

Here is the full sample of the function for logging sentiments in real-time over the WebSocket.

// Fired when a message is received from the WebSocket server
ws.onmessage = (event) => {
  // You can find the conversationId in data.message.data.conversationId
  const data = JSON.parse(event.data);
  if (data.type === 'message' && data.message.hasOwnProperty('data')) {
    console.log('conversationId', data.message.data.conversationId);
    const conversationId = data.message.data.conversationId;
    console.log('onmessage event', event);

    // You can log sentiments on messages from data.message.data.conversationId
    const request = new XMLHttpRequest();
    request.responseType = "text";
    const sentimentEndpoint = `https://api.symbl.ai/v1/conversations/${conversationId}/messages?sentiment=true`;
    request.open("GET", sentimentEndpoint);
    request.setRequestHeader('Authorization', `Bearer ${accessToken}`);
    request.setRequestHeader('Content-Type', 'application/json');
    // Log the response once the request has completed.
    request.onreadystatechange = (e) => {
      if (request.readyState === XMLHttpRequest.DONE) {
        console.log(request.responseText);
      }
    };
    request.send();
  }
};

Full code example

Here's the complete code example, which you can also view on GitHub.

This example is designed to run in the browser.

const accessToken = '<ACCESS_TOKEN>'; // Replace with your access token.
// Refer to the Authentication section for how to generate the accessToken: https://docs.symbl.ai/docs/authenticate

// Generate a unique connection ID. crypto.randomUUID() is built into modern
// browsers; in Node.js you can use require('uuid').v4 instead.
const connectionId = crypto.randomUUID();
const symblEndpoint = `wss://api.symbl.ai/v1/streaming/${connectionId}?access_token=${accessToken}`;
const ws = new WebSocket(symblEndpoint);

// Fired when a message is received from the WebSocket server
ws.onmessage = (event) => {
  // Parse the JSON data from the event.
  const data = JSON.parse(event.data);

  // You can find the conversationId in data.message.data.conversationId
  if (data.type === 'message' && data.message.hasOwnProperty('data')) {
    console.log('conversationId', data.message.data.conversationId);
    const conversationId = data.message.data.conversationId;
    console.log('onmessage event', event);

    // You can log sentiments on messages from data.message.data.conversationId
    const request = new XMLHttpRequest();
    request.responseType = "text";
    const sentimentEndpoint = `https://api.symbl.ai/v1/conversations/${conversationId}/messages?sentiment=true`;
    request.open("GET", sentimentEndpoint);
    request.setRequestHeader('Authorization', `Bearer ${accessToken}`);
    request.setRequestHeader('Content-Type', 'application/json');
    // Log the response once the request has completed.
    request.onreadystatechange = (e) => {
      if (request.readyState === XMLHttpRequest.DONE) {
        console.log(request.responseText);
      }
    };
    request.send();
  }

  if (data.type === 'message_response') {
    for (let message of data.messages) {
      console.log('Transcript (more accurate): ', message.payload.content);
    }
  }
  if (data.type === 'topic_response') {
    for (let topic of data.topics) {
      console.log('Topic detected: ', topic.phrases)
    }
  }
  if (data.type === 'insight_response') {
    for (let insight of data.insights) {
      console.log('Insight detected: ', insight.payload.content);
    }
  }
  if (data.type === 'message' && data.message.hasOwnProperty('punctuated')) {
    console.log('Live transcript (less accurate): ', data.message.punctuated.transcript)
  }
  console.log(`Response type: ${data.type}. Object: `, data);
};

// Fired when the WebSocket closes unexpectedly due to an error or lost connection
ws.onerror = (err) => {
  console.error(err);
};

// Fired when the WebSocket connection has been closed
ws.onclose = (event) => {
  console.info('Connection to websocket closed');
};

// Fired when the connection succeeds.
ws.onopen = (event) => {
  ws.send(JSON.stringify({
    "type": "start_request",
    "meetingTitle": "Websockets How-to", // Conversation name
    "insightTypes": ["question", "action_item"], // Enable insights
    "trackers": [
      {
        "name": "content",
        "vocabulary": [
          "finding talent",
          "life",
          "purpose",
          "how we will"
        ]
      }
    ],
    "config": {
      "sentiment": true,
      "confidenceThreshold": 0.5,
      "languageCode": "en-US",
      "speechRecognition": {
        "encoding": "LINEAR16",
        "sampleRateHertz": 44100
      }
    },
    "speaker": {
      "userId": "[email protected]",
      "name": "Example Sample"
    }
  }));
};

const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: false });

/**
 * The callback function which fires after a user gives the browser permission to use
 * the computer's microphone. Starts a recording session which sends the audio stream to
 * the WebSocket endpoint for processing.
 */
const handleSuccess = (stream) => {
  const AudioContext = window.AudioContext;
  const context = new AudioContext();
  const source = context.createMediaStreamSource(stream);
  // ScriptProcessorNode is deprecated in favor of AudioWorklet, but it keeps
  // this example short.
  const processor = context.createScriptProcessor(1024, 1, 1);
  const gainNode = context.createGain();
  source.connect(gainNode);
  gainNode.connect(processor);
  processor.connect(context.destination);
  processor.onaudioprocess = (e) => {
    // Convert the 32-bit float samples to a 16-bit (LINEAR16) payload.
    const inputData = e.inputBuffer.getChannelData(0);
    const targetBuffer = new Int16Array(inputData.length);
    for (let index = 0; index < inputData.length; index++) {
      // Clamp each sample to [-1, 1] before scaling to the 16-bit range.
      targetBuffer[index] = 32767 * Math.max(-1, Math.min(1, inputData[index]));
    }
    // Send the audio chunk to the WebSocket.
    if (ws.readyState === WebSocket.OPEN) {
      ws.send(targetBuffer.buffer);
    }
  };
};


handleSuccess(stream);

Example response

This is a short example of a conversation transcript with sentiment analysis. Each message includes a sentiment object: polarity.score ranges from -1.0 (most negative) to 1.0 (most positive), and suggested maps the score to a negative, neutral, or positive label.

{
  "type": "message_response",
  "messages": [
    {
      "from": {
        "id": "b421075a-992f-41d6-bd55-edeb8aad5c22",
        "name": "Tony Stark",
        "userId": "[email protected]"
      },
      "payload": {
        "content": "Let us see if the sentiments are working correctly or not.",
        "contentType": "text/plain"
      },
      "id": "f34f7b34-68ac-43ef-b2ab-42d0c5e4c44c",
      "channel": {
        "id": "realtime-api"
      },
      "metadata": {
        "disablePunctuation": true,
        "timezoneOffset": 480,
        "originalContent": "Let us see if the sentiments are working correctly or not.",
        "words": "[{\"word\":\"Let\",\"startTime\":\"2022-12-15T09:42:13.742Z\",\"endTime\":\"2022-12-15T09:42:14.042Z\",\"timeOffset\":null,\"duration\":null},{\"word\":\"us\",\"startTime\":\"2022-12-15T09:42:14.042Z\",\"endTime\":\"2022-12-15T09:42:14.042Z\",\"timeOffset\":null,\"duration\":null},{\"word\":\"see\",\"startTime\":\"2022-12-15T09:42:14.042Z\",\"endTime\":\"2022-12-15T09:42:14.142Z\",\"timeOffset\":null,\"duration\":null},{\"word\":\"if\",\"startTime\":\"2022-12-15T09:42:14.142Z\",\"endTime\":\"2022-12-15T09:42:14.242Z\",\"timeOffset\":null,\"duration\":null},{\"word\":\"the\",\"startTime\":\"2022-12-15T09:42:14.242Z\",\"endTime\":\"2022-12-15T09:42:14.341Z\",\"timeOffset\":null,\"duration\":null},{\"word\":\"sentiments\",\"startTime\":\"2022-12-15T09:42:14.341Z\",\"endTime\":\"2022-12-15T09:42:14.842Z\",\"timeOffset\":null,\"duration\":null},{\"word\":\"are\",\"startTime\":\"2022-12-15T09:42:14.842Z\",\"endTime\":\"2022-12-15T09:42:14.942Z\",\"timeOffset\":null,\"duration\":null},{\"word\":\"working\",\"startTime\":\"2022-12-15T09:42:14.942Z\",\"endTime\":\"2022-12-15T09:42:15.142Z\",\"timeOffset\":null,\"duration\":null},{\"word\":\"correctly\",\"startTime\":\"2022-12-15T09:42:15.142Z\",\"endTime\":\"2022-12-15T09:42:15.542Z\",\"timeOffset\":null,\"duration\":null},{\"word\":\"or\",\"startTime\":\"2022-12-15T09:42:15.542Z\",\"endTime\":\"2022-12-15T09:42:15.642Z\",\"timeOffset\":null,\"duration\":null},{\"word\":\"not.\",\"startTime\":\"2022-12-15T09:42:15.642Z\",\"endTime\":\"2022-12-15T09:42:15.942Z\",\"timeOffset\":null,\"duration\":null}]",
        "originalMessageId": "4b2b861c-b7d9-4826-9502-2162906d0e7b"
      },
      "dismissed": false,
      "duration": {
        "startTime": "2022-12-15T09:42:13.742Z",
        "endTime": "2022-12-15T09:42:15.942Z"
      },
      "sentiment": {
        "polarity": {
          "score": -0.3
        },
        "suggested": "neutral"
      }
    }
  ],
  "sequenceNumber": 0
}
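
Because the start_request set "sentiment": true, message_response events carry this sentiment object directly. As a sketch, you could extend the message_response branch of the onmessage handler to log each message's sentiment, based on the response shape above:

// A sketch based on the response shape above: log the suggested sentiment
// and polarity score alongside each final transcript message.
if (data.type === 'message_response') {
  for (let message of data.messages) {
    if (message.sentiment) {
      console.log(`"${message.payload.content}" -> ${message.sentiment.suggested} (polarity: ${message.sentiment.polarity.score})`);
    }
  }
}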