
Introduction

Base URL

https://api.symbl.ai/v1

Symbl is a comprehensive suite of APIs for analyzing natural human conversations - both for your team’s internal conversations and of course the conversations you are having with your customers. Built on our Contextual Conversation Intelligence (C2I) technology, the APIs enable you to rapidly incorporate human-level understanding that goes beyond simple natural language processing of voice and text conversations.

Get real-time analysis of free-flowing discussions like meetings, sales calls, support conversations etc to automatically surface highly relevant summary topics, contextual insights, suggestive action items, follow-ups, decisions and questions.

Take a look at a sample output generated from our APIs

Getting Started

Postman

Dial in from your phone or into an existing meeting and begin speaking. After you finish speaking, the Stop Connection endpoint will return a summary link that shows all the generated insights.

The Symbl API Collection has all the APIs that we offer, so you can quickly execute the requests and see the kind of output that is generated in the response. The three main requests you should execute are:

  1. Generate Token - run this request first to generate your API token. Your token is required to run any other request in the collection, so this should be the first endpoint you hit, using the app id and app secret you can get from our platform.

  2. Start Connection - run this request second to start a connection with Symbl. In the response, a conversationId is generated. Use this ID to get insights out of the meeting using our other Conversation APIs.

  3. Stop Connection - run this request last to end the call. In the response, a summaryUrl is generated, which links to a web page containing the transcript from the conversation, along with topics, action items, questions and follow-ups.

Run in Postman

Ready to test? When you've built out your integration, review our testing tips to ensure that your integration works end to end.

Explore Sample Apps

Use any of the sample integrations as a starting point to extend your own custom applications.

If you're interested in everything you can do with Symbl, check out our sample applications on GitHub that demonstrate how Symbl can be used to connect to Twilio Media Streams, Salesforce Dashboard, Outlook Calendar and many more.


Browse our demo library and look at sample code showing how to integrate voice intelligence into existing applications.


Integrate

You've got an idea, let's start building

You can integrate Symbl in one of three ways:

  1. Voice SDK - install Symbl's Voice SDK into your own application to set up a PSTN or SIP connection, push speaker events and generate insights from your conversations.

  2. Voice API - integrate with our RESTful Voice APIs, either telephony or real-time WebSockets, to detect actionable insights in conversations.

  3. Conversation API - a suite of requests that lets you pick apart the insights generated from your conversations.


Voice SDK

The Programmable Voice SDKs allow you to add Conversational Intelligence directly into your web applications and meeting platforms. With the Voice SDK, you can generate intelligent insights such as action items, topics and questions.

This full guide demonstrates how to use the SDK to add voice integration to your existing application.


Authentication

In order to use the Voice SDK, you need a valid appId and appSecret. If you haven't already, log in to the platform to get your credentials.

Initialize the SDK

1. After getting your appId and appSecret, use the command below to install the SDK and add it to your npm project's package.json.

$ npm install --save symbl-node

2. Reference the SDK in either the ES5 or ES6 way.

ES5

var sdk = require('symbl-node').sdk;

OR

ES6

import { sdk } from 'symbl-node';

3. After installing and referencing the SDK, initialize it.

sdk.init({
  appId: 'yourAppId',
  appSecret: 'yourAppSecret',
  basePath: 'https://api.symbl.ai'
})
.then(() => console.log('SDK Initialized.'))
.catch(err => console.error('Error in initialization.', err));

Connect to Endpoints

The code snippet below dials in using PSTN and hangs up after 60 seconds.

const {
  sdk
} = require('symbl-node');

sdk.init({
  appId: 'yourAppId',
  appSecret: 'yourAppSecret',
  basePath: 'https://api.symbl.ai'
}).then(() => {
  sdk.startEndpoint({
    endpoint: {
      type: 'pstn', // This can be pstn or sip
      phoneNumber: '<number_to_call>', // include country code 
      dtmf: '<meeting_id>' // if password protected, use "dtmf": "<meeting_id>#,#<password>#"
    }
  }).then(connection => {
    console.log('Successfully connected.', connection.connectionId);

    // Scheduling stop endpoint call after 60 seconds for demonstration purposes
    // In real adoption, sdk.stopEndpoint() should be called when the meeting or call actually ends

    setTimeout(() => {
      sdk.stopEndpoint({
        connectionId: connection.connectionId
      }).then(() => {
        console.log('Stopped the connection');
        console.log('Summary Info:', connection.summaryInfo);
        console.log('Conversation ID:', connection.conversationId);
      }).catch(err => console.error('Error while stopping the connection', err));
    }, 60000);
  }).catch(err => console.error('Error while starting the connection', err));

}).catch(err => console.error('Error in SDK initialization.', err));

We recommend using SIP instead of PSTN whenever possible, as it offers higher audio quality options than PSTN. The SIP endpoint also accepts an optional audio configuration. Contact us for your specific requirements.

This SDK supports dialing in through a simple phone number (PSTN) or a Voice over IP system (SIP endpoint). If you don't have your own Voice over IP system, you will want to use a phone number to make the connection.

PSTN (Public Switched Telephone Networks)

The Public Switched Telephone Network (PSTN) is the network that carries your calls when you dial in from a landline or cell phone. It refers to the worldwide network of voice-carrying telephone infrastructure, including privately-owned and government-owned infrastructure.

endpoint: {
  type: 'pstn',
  phoneNumber: '14083380682', // Phone number to dial in
  dtmf: '6155774313#' // Joining code
}

SIP (Session Initiation Protocol)

Session Initiation Protocol (SIP) is a standardized communications protocol that has been widely adopted for managing multimedia communication sessions for voice and video calls. SIP may be used to establish connectivity between your communications infrastructures and Symbl's communications platform.

endpoint: {
  type: 'sip',
  uri: 'sip:555@your_sip_domain', // SIP URI to dial in
  audioConfig: { // Optionally any audio configuration
    sampleRate: 16000,
    encoding: 'PCMU',
    sampleSize: '16'
  }
}

Push Events

The example below shows how to connect to a PSTN endpoint, create a speakerEvent instance and push events on the connection.

const {
  sdk,
  SpeakerEvent
} = require('symbl-node');

sdk.init({
  appId: 'yourAppId',
  appSecret: 'yourAppSecret',
  basePath: 'https://api.symbl.ai'
}).then(() => {
  sdk.startEndpoint({
    endpoint: {
      type: 'pstn',
      phoneNumber: '<number_to_call>', // include country code 
      dtmf: '<meeting_id>' // if password protected, use "dtmf": "<meeting_id>#,#<password>#"
    }
  }).then(connection => {
    const connectionId = connection.connectionId;
    console.log('Successfully connected.', connectionId);

    const speakerEvent = new SpeakerEvent();
    speakerEvent.type = SpeakerEvent.types.startedSpeaking;
    speakerEvent.user = {
      userId: 'john@example.com',
      name: 'John'
    };
    speakerEvent.timestamp = new Date().toISOString();

    sdk.pushEventOnConnection(
      connectionId,
      speakerEvent.toJSON(),
      (err) => {
        if (err) {
          console.error('Error during push event.', err);
        } else {
          console.log('Event pushed!');
        }
      }
    );

    // Scheduling stop endpoint call after 60 seconds for demonstration purposes
    // In real adoption, sdk.stopEndpoint() should be called when the meeting or call actually ends

    setTimeout(() => {
      sdk.stopEndpoint({
        connectionId: connection.connectionId,
      }).then(() => {
        console.log('Stopped the connection');
        console.log('Summary Info:', connection.summaryInfo);
        console.log('Conversation ID:', connection.conversationId);
      }).catch(err => console.error('Error while stopping the connection.', err));
    }, 60000);
  }).catch(err => console.error('Error while starting the connection', err));

}).catch(err => console.error('Error in SDK initialization.', err));

Setting the timestamp on a speakerEvent is optional, but we recommend providing accurate timestamps of when the events occurred for more precise results.

Events can be pushed to an ongoing connection to have them processed. The code snippet above shows a simple example.

Every event must have a type to define the purpose of the event at a more granular level, usually to indicate the different activities associated with the event resource. For example, a speaker event can have the type started_speaking. An event may have additional fields specific to the event.

Currently, Symbl only supports the speaker event, which is described below.

Speaker Event

The speaker event is associated with different individual attendees in the meeting or session. An example of a speaker event is shown below.

In the code example, the user object needs a userId field to uniquely identify the user.

Speaker Event has the following types:

started_speaking

This event contains the details of the user who started speaking, along with the timestamp (in ISO 8601 format) of when they started speaking.

const speakerEvent = new SpeakerEvent({
  type: SpeakerEvent.types.startedSpeaking,
  timestamp: new Date().toISOString(),
  user: {
    userId: 'john@example.com',
    name: 'John'
  }
});

stopped_speaking

This event contains the details of the user who stopped speaking, along with the timestamp (in ISO 8601 format) of when they stopped speaking.

const speakerEvent = new SpeakerEvent({
  type: SpeakerEvent.types.stoppedSpeaking,
  timestamp: new Date().toISOString(),
  user: {
    userId: 'john@example.com',
    name: 'John'
  }
});


As shown in the above examples, it's fine to reuse the same speakerEvent instance per user by changing the event's type, which reduces the number of SpeakerEvent instances you create.

A startedSpeaking event is pushed on the ongoing connection. You can use the pushEventOnConnection() method from the SDK to push the events.

Outbound Integrations

Symbl.ai currently offers Email and Calendar as out-of-the-box integrations. However, this can be extended to any work tool where actionable insights need to be pushed to enhance productivity and reduce the time users spend manually entering information from conversations.

Complete Example

const {
  sdk,
  SpeakerEvent
} = require('symbl-node');

sdk.init({
  appId: 'yourAppId',
  appSecret: 'yourAppSecret',
  basePath: 'https://api.symbl.ai'
}).then(() => {

  console.log('SDK Initialized');
  sdk.startEndpoint({
    endpoint: {
      type: 'pstn',
      phoneNumber: '14087407256',
      dtmf: '6327668#'
    }
  }).then(connection => {

    const connectionId = connection.connectionId;
    console.log('Successfully connected.', connectionId);
    const speakerEvent = new SpeakerEvent({
      type: SpeakerEvent.types.startedSpeaking,
      user: {
        userId: 'john@example.com',
        name: 'John'
      }
    });

    setTimeout(() => {
      speakerEvent.timestamp = new Date().toISOString();
      sdk.pushEventOnConnection(
        connectionId,
        speakerEvent.toJSON(),
        (err) => {
          if (err) {
            console.error('Error during push event.', err);
          } else {
            console.log('Event pushed!');
          }
        }
      );
    }, 2000);

    setTimeout(() => {
      speakerEvent.type = SpeakerEvent.types.stoppedSpeaking;
      speakerEvent.timestamp = new Date().toISOString();

      sdk.pushEventOnConnection(
        connectionId,
        speakerEvent.toJSON(),
        (err) => {
          if (err) {
            console.error('Error during push event.', err);
          } else {
            console.log('Event pushed!');
          }
        }
      );
    }, 12000);

    // Scheduling stop endpoint call after 60 seconds
    setTimeout(() => {
      sdk.stopEndpoint({
        connectionId: connection.connectionId
      }).then(() => {
        console.log('Stopped the connection');
        console.log('Summary Info:', connection.summaryInfo);
        console.log('Conversation ID:', connection.conversationId);
      }).catch(err => console.error('Error while stopping the connection.', err));
    }, 90000);

  }).catch(err => console.error('Error while starting the connection', err));

}).catch(err => console.error('Error in SDK initialization.', err));

Above is a quick simulated speaker event example that

  1. Initializes the SDK
  2. Initiates a connection with an endpoint
  3. Sends a speaker event of type startedSpeaking for user John
  4. Sends a speaker event of type stoppedSpeaking for user John
  5. Ends the connection with the endpoint

Strictly for illustration and understanding purposes, the code above pushes events periodically using the setTimeout() method; in real usage, they should be pushed as they occur.

Send Summary Email

const {
  sdk,
  SpeakerEvent
} = require('symbl-node');

sdk.init({
  appId: 'yourAppId',
  appSecret: 'yourAppSecret',
  basePath: 'https://api.symbl.ai'
}).then(() => {
  console.log('SDK Initialized');
  sdk.startEndpoint({
    endpoint: {
      type: 'pstn',
      phoneNumber: '14087407256',
      dtmf: '6327668#'
    },
    actions: [{
      "invokeOn": "stop",
      "name": "sendSummaryEmail",
      "parameters": {
        "emails": [
          "john@exmaple.com",
          "mary@example.com",
          "jennifer@example.com"
        ]
      }
    }],
    data: {
      session: {
        name: 'My Meeting Name' // Title of the Meeting
      },
      users: [{
          user: {
            name: "John",
            userId: "john@example.com",
            role: "organizer"
          }
        },
        {
          user: {
            name: "Mary",
            userId: "mary@example.com"
          }
        },
        {
          user: {
            name: "John",
            userId: "jennifer@example.com"
          }
        }
      ]
    }
  }).then((connection) => {
    console.log('Successfully connected.');

    // Events pushed in between
    setTimeout(() => {
      // After the stop endpoint succeeds, a summary email will be sent to all the email addresses specified in the actions parameter
      sdk.stopEndpoint({
        connectionId: connection.connectionId
      }).then(() => {
        console.log('Stopped the connection');
        console.log('Summary Info:', connection.summaryInfo);
        console.log('Conversation ID:', connection.conversationId);
      }).catch(err => console.error('Error while stopping the connection.', err));
    }, 30000);

  }).catch(err => console.error('Error while starting the connection', err));

}).catch(err => console.error('Error in SDK initialization.', err));

This is an example of the summary page you can expect to receive at the end of your call.

Summary Page

Tuning your Summary Page

You can choose to tune your Summary Page with the help of query parameters to try different configurations and see how the results look.

Query Parameters

You can configure the summary page by passing in the configuration through query parameters in the summary page URL that gets generated at the end of your meeting. See the end of the URL in this example:

https://meetinginsights.symbl.ai/meeting/#/eyJ1...I0Nz?insights.minScore=0.95&topics.orderBy=position

Query Parameter | Default Value | Supported Values | Description
insights.minScore | 0.8 | 0.5 to 1.0 | Minimum score that the summary page should use to render the insights
insights.enableAssignee | false | [true, false] | Enable or disable rendering of the assignee and due date of the insight
insights.enableAddToCalendarSuggestion | true | [true, false] | Enable or disable the add-to-calendar suggestion when applicable on insights
insights.enableInsightTitle | true | [true, false] | Enable or disable the title of an insight. The title indicates the person who originated the insight and, where applicable, its assignee.
topics.enabled | true | [true, false] | Enable or disable the summary topics in the summary page
topics.orderBy | 'score' | ['score', 'position'] | Ordering of the topics. score - order topics by topic importance score; position - order topics by the position in the transcript where they first surfaced.
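For example, if summaryUrl holds the link returned when you stop a connection, a minimal sketch of appending tuning parameters looks like this. Note that the parameters go at the very end of the URL, after the fragment, as in the example above.

const tunedUrl = `${summaryUrl}?insights.minScore=0.9&topics.orderBy=position`;
console.log(tunedUrl); // Open this link to see the re-tuned summary page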

Voice API

The Voice API provides the REST interface for adding Symbl to your call and generating actionable insights from your conversations.

POST Authentication

If you don't already have your app id or app secret, log in to the platform to get your credentials.

To invoke any API call, you must have a valid Access Token generated using the valid application credentials.

To generate the token using the appId and appSecret, make an HTTP POST request with these details.

POST https://api.symbl.ai/oauth2/token:generate
{
  "type": "application",
  "appId": "your_appId",
  "appSecret": "your_appSecret"
}


curl -k -X POST "https://api.symbl.ai/oauth2/token:generate" \
     -H "accept: application/json" \
     -H "Content-Type: application/json" \
     -d "{ \"type\": \"application\", \"appId\": \"<appId>\", \"appSecret\": \"<appSecret>\"}"
 const request = require('request');

 const authOptions = {
   method: 'post',
   url: "https://api.symbl.ai/oauth2/token:generate",
   body: {
       type: "application",
       appId: "<appId>",
       appSecret: "<appSecret>"
   },
   json: true
 };

 request(authOptions, (err, res, body) => {
   if (err) {
     console.error('error posting json: ', err);
     throw err
   }

   console.log(JSON.stringify(body, null, 2));
 });

JavaScript code to generate the Access Token. The code should work with Node.js 7+ and browsers. You will need to install request to run this sample code.

$ npm i request

For a valid appId and appSecret combination, the success response will be returned like this.

 {
   "accessToken": "your_accessToken",
   "expiresIn": 3600
 }


For any invalid appId and appSecret combination, an HTTP 401 Unauthorized response code will be returned.
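Since the token is valid for expiresIn seconds (3600 in the example above), you may want to cache it instead of generating a new one for every request. Below is a minimal sketch of such caching, using the request package as above; this helper is an illustration, not part of Symbl's SDK.

const request = require('request');

let cachedToken = null;
let expiresAt = 0;

function getAccessToken(callback) {
  // Reuse the cached token while it is still valid (with a 60-second safety margin)
  if (cachedToken && Date.now() < expiresAt - 60000) {
    return callback(null, cachedToken);
  }
  request.post({
    url: 'https://api.symbl.ai/oauth2/token:generate',
    body: { type: 'application', appId: '<appId>', appSecret: '<appSecret>' },
    json: true
  }, (err, res, body) => {
    if (err) return callback(err);
    cachedToken = body.accessToken;
    expiresAt = Date.now() + body.expiresIn * 1000; // expiresIn is in seconds
    callback(null, cachedToken);
  });
}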

POST Telephony

Example API Call

curl -k -X POST "https://api.symbl.ai/v1/endpoint:connect" \
     -H "accept: application/json" \
     -H "Content-Type: application/json" \
     -H "x-api-key: <your_auth_token>" \
     -d @location_of_fileName_with_request_payload
const request = require('request');

const payload = {
  "operation": "start",
  "endpoint": {
    "type" : "pstn",
    "phoneNumber": "<number_to_call>", // include country code 
    "dtmf": "<meeting_id>" // if password protected, use "dtmf": "<meeting_id>#,#<password>#"
  },
  "actions": [{
    "invokeOn": "stop",
    "name": "sendSummaryEmail",
    "parameters": {
      "emails": [
        "joe.symbl@example.com"
      ]
    }
  }],
  "data" : {
      "session": {
          "name" : "My Meeting"
      }
  } 
}

const your_auth_token = "<your_auth_token>";

request.post({
    url: 'https://api.symbl.ai/v1/endpoint:connect',
    headers: {'x-api-key': your_auth_token},
    body: payload,
    json: true
}, (err, response, body) => {
  console.log(body);
});

The above command returns an object structured like this:

{
    "eventUrl": "https://api.symbl.ai/v1/event/771a8757-eff8-4b6c-97cd-64132a7bfc6e",
    "resultWebSocketUrl": "wss://api.symbl.ai/events/771a8757-eff8-4b6c-97cd-64132a7bfc6e",
    "connectionId": "771a8757-eff8-4b6c-97cd-64132a7bfc6e",
    "conversationId": "51356232423"
}

The Telephony Voice API allows you to easily use Symbl's Language Insights capabilities.

It exposes the functionality of Symbl to dial in to the conference. Supported endpoints are given below. Additionally, events can be passed for further processing. The supported types of events are discussed in detail in the section below.

HTTP REQUEST

POST https://api.symbl.ai/v1/endpoint:connect

Request Parameters

Parameter | Type | Description
operation | string | enum([start, stop]) - start or stop the connection
endpoint | object | Object containing the type of the session (either pstn or sip), phoneNumber (the meeting number Symbl should call, with the country code prepended) and dtmf (the conference passcode)
actions | list | Actions that should be performed while this connection is active. Currently only one action is supported - sendSummaryEmail
data | object | Object containing a session object, which has a name field corresponding to the name of the meeting

Response Object

Field | Description
eventUrl | REST API endpoint for pushing speaker events while the conversation is in progress, to add additional speaker context. Example - in an ongoing meeting, you can push speaker events.
resultWebSocketUrl | Same as eventUrl, but over WebSocket. The latency of events is lower with a dedicated WebSocket connection.
connectionId | Ephemeral connection identifier of the request, to uniquely identify the telephony connection. Once the connection is stopped using the "stop" operation, or is closed for some other reason, the connectionId is no longer valid.
conversationId | Represents the conversation - this is the ID to use with the Conversation API to access the conversation.
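To stop the connection, the request goes to the same endpoint with the stop operation described in the request parameters above, along with the connectionId returned by the start request. A minimal sketch follows; the exact shape of the stop payload is an assumption based on those parameters:

const request = require('request');

request.post({
    url: 'https://api.symbl.ai/v1/endpoint:connect',
    headers: {'x-api-key': '<your_auth_token>'},
    body: {
      operation: 'stop',
      connectionId: '<connectionId>' // returned by the start request
    },
    json: true
}, (err, response, body) => {
  console.log(body);
});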


To play around with a few examples, we recommend a REST client called Postman. Simply tap the button below to import a pre-made collection of examples.

Run in Postman

Try it out

When you have started the connection through the API, try speaking a few sentences and view the summary email that gets generated.

WS Realtime Websocket

In the example below, we've used the websocket npm package for the WebSocket client, and the mic package for getting raw audio from the microphone.

$ npm i websocket mic

For this example, we are using your mic to stream audio data. You will most likely want to use other inbound sources for this.

const WebSocketClient = require('websocket').client;

const mic = require('mic');

const micInstance = mic({
  rate: '44100',
  channels: '1',
  debug: false,
  exitOnSilence: 6
});

// Get input stream from the microphone
const micInputStream = micInstance.getAudioStream();
let connection = undefined;

Create a websocket client instance

const ws = new WebSocketClient();

ws.on('connectFailed', (err) => {
  console.error('Connection Failed.', err);
});

ws.on('connect', (connection) => {

  // Start the microphone
  micInstance.start();

  connection.on('close', () => {
    console.log('WebSocket closed.')
  });

  connection.on('error', (err) => {
    console.log('WebSocket error.', err)
  });

  connection.on('message', (data) => {
    if (data.type === 'utf8') {
      const {
        utf8Data
      } = data;
      console.log(utf8Data);  // Print the data for illustration purposes
    }
  });

  console.log('Connection established.');

  connection.send(JSON.stringify({
    "type": "start_request",
    "insightTypes": ["question", "action_item"],
    "config": {
      "confidenceThreshold": 0.9,
      // "timezoneOffset": 480, // Your timezone offset from UTC in minutes
      "languageCode": "en-US",
      "speechRecognition": {
        "encoding": "LINEAR16",
        "sampleRateHertz": 44100 // Make sure the correct sample rate is provided for best results
      },
      "meetingTitle": "Client Meeting"
    },
    "speaker": {
      "userId": "jane.doe@example.com",
      "name": "Jane"
    }
  }));

  micInputStream.on('data', (data) => {
    connection.send(data);
  });
});

For this example, we time out our call after 2 minutes (120 seconds), but you would most likely want to make the stop_request call when your WebSocket connection actually ends.

  // Schedule the stop of the client after 2 minutes (120 sec)

  setTimeout(() => {
    micInstance.stop();
    // Send stop request
    connection.sendUTF(JSON.stringify({
      "type": "stop_request"
    }));
    connection.close();
  }, 120000);

Generate the token and replace it in the placeholder <accessToken>. Once the code is running, start speaking and you should see the message_response and insight_response messages getting printed on the console.

ws.connect(
  'wss://api.symbl.ai/v1/realtime/insights/1',
  null,
  null,
  { 'X-API-KEY': '<accessToken>' }
);

In the example below, we've used the native WebSocket API, which is compatible with most browsers and doesn't need any additional npm packages. Generate the token and replace it in the placeholder <accessToken>. Then, create the WebSocket.

let url = `wss://api.symbl.ai/v1/realtime/insights/1?access_token=${'<accessToken>'}`
let ws = new WebSocket(url);

ws.onerror = (err) => {
  console.error('Connection Failed.', err);
};
ws.onopen = () => {
  console.log('Websocket open.')

  ws.onmessage = (event) => {
    if (event.type === 'message') {
      console.log(event.data);  // Print the data for illustration purposes
    }
  };

  ws.onclose = () => {
    console.log('WebSocket closed.')
  };

  ws.onerror = (err) => {
    console.log('WebSocket error.', err)
  };

  console.log('Connection established.');

  ws.send(JSON.stringify({
    "type": "start_request",
    "insightTypes": ["question", "action_item"],
    "config": {
      "confidenceThreshold": 0.9,
      // "timezoneOffset": 480, // Your timezone offset from UTC in minutes
      "languageCode": "en-US",
      "speechRecognition": {
        "encoding": "LINEAR16",
        "sampleRateHertz": 44100 // Make sure the correct sample rate is provided for best results
      },
      "meetingTitle": "Client Meeting"
    },
    "speaker": {
      "userId": "jane.doe@example.com",
      "name": "Jane"
    }
  }));
}

To get direct access to the mic, we're going to use an API in the WebRTC specification called getUserMedia().

Once the code is running, start speaking and you should see the message_response and insight_response messages getting printed on the console.

  const handleSuccess = function(stream) {
    const context = new AudioContext();
    const source = context.createMediaStreamSource(stream);
    const processor = context.createScriptProcessor(1024, 1, 1);
    source.connect(processor);
    processor.connect(context.destination);
    processor.onaudioprocess = function(e) {
      // Convert the 32-bit float samples to a 16-bit integer payload
      const inputData = e.inputBuffer.getChannelData(0);
      const targetBuffer = new Int16Array(inputData.length);
      for (let index = 0; index < inputData.length; index++)
          targetBuffer[index] = 32767 * Math.max(-1, Math.min(1, inputData[index]));
      // Send to websocket
      if (ws.readyState === WebSocket.OPEN) {
          ws.send(targetBuffer.buffer);
      }
    };
  };

  navigator.mediaDevices.getUserMedia({ audio: true, video: false })
    .then(handleSuccess);

  // Schedule the stop of the client after 2 minutes (120 sec)
  setTimeout(() => {
    // Send stop request
    ws.send(JSON.stringify({
      "type": "stop_request"
    }));
    ws.close();
  }, 120000);

Introduction

Symbl's WebSocket-based real-time API is the most direct, fastest and most accurate of our interfaces for pushing an audio stream in real time and getting the results back as soon as they're available.

Connection Establishment

This is a WebSocket endpoint, so it starts as an HTTP request with headers indicating the client's desire to upgrade the connection to a WebSocket instead of using HTTP semantics. The server indicates its willingness to participate in the WebSocket connection by returning an HTTP 101 Switching Protocols response. After this handshake, both client and service keep the socket open and begin using a message-based protocol to send and receive information. Please refer to the WebSocket specification (RFC 6455) for a more in-depth understanding of the handshake process.

Message Formats

Both the client and the server can send messages after the connection is established. According to RFC 6455, WebSocket messages can have either a text or a binary encoding. The two encodings use different on-the-wire formats. Each format is optimized for efficient encoding, transmission, and decoding of the message payload.

Text Message

Text messages over WebSocket must use UTF-8 encoding. A text message is a serialized JSON message. Every text message has a type field to specify the type or purpose of the message.

Binary Message

Binary WebSocket messages carry a binary payload. For the real-time API, audio is transmitted to the service using binary messages. All other messages are text messages.

Client Messages

This section describes the messages that originate from the client and are sent to the service. The types of messages sent by the client are start_request, stop_request and binary messages containing audio.

Configuration

Main Message Body

Field | Required | Supported Values | Description
type | true | start_request, stop_request | Type of message
insightTypes | false | action_item, question | Types of insights to return. If not provided, no insights will be returned.
config | false | | Configuration for this request. See the config section below for more details.
speaker | false | | Speaker identity to use for audio in this WebSocket connection. If omitted, no speaker identification will be used for processing. See below.

config

Field | Required | Supported Values | Default Value | Description
confidenceThreshold | false | 0.0 - 1.0 | 0.5 | Minimum confidence score that should be met for the API to consider it a valid insight. If not provided, defaults to 0.5, i.e. 50% or more.
languageCode | false | | en-US | The language code as per the BCP 47 specification.
speechRecognition | false | | | Speech recognition configuration for the audio on this WebSocket connection. See the speechRecognition section below.

speechRecognition

Field | Required | Supported Values | Default Value | Description
encoding | false | LINEAR16, FLAC, MULAW | LINEAR16 | Audio encoding in which the audio will be sent over the WebSocket.
sampleRateHertz | false | | 16000 | The rate of the incoming audio stream.

speaker

Field | Required | Description
userId | false | Any user identifier for the user.
name | false | Display name of the user.

Messages

Start Request

{
  "type": "start_request",
  "insightTypes": ["question", "action_item"],
  "config": {
    "confidenceThreshold": 0.9,
    <!-- "timezoneOffset": 480, -->
    "languageCode": "en-US",
    "speechRecognition": {
      "encoding": "LINEAR16",
      "sampleRateHertz": 16000
    }
  },
  "speaker": {
    "userId": "jane.doe@example.com",
    "name": "Jane"
  }
}


This is a request to start the processing after the connection is established. Right after this message has been sent, the audio should be streamed; any binary audio streamed before the receipt of this message will be ignored.

Stop Request

{
  "type": "stop_request"
}


This is a request to stop the processing. After the receipt of this message, the service will stop any processing and close the WebSocket connection.

Example of the message_response object

{
  "type": "message_response",
  "messages": [
    {
      "from": {
        "name": "Jane",
        "userId": "jane.doe@example.com"
      },
      "payload": {
        "content": "I was very impressed by your profile, and I am excited to know more about you.",
        "contentType": "text/plain"
      }
    },
    {
      "from": {
        "name": "Jane",
        "userId": "jane.doe@example.com"
      },
      "payload": {
        "content": "So tell me, what is the most important quality that you acquired over all of your professional career?",
        "contentType": "text/plain"
      }
    }
  ]
}

Sending Binary Messages with Audio

The client needs to send the audio to the service by converting the audio stream into a series of audio chunks. Each chunk carries a segment of audio to be processed. The maximum size of a single audio chunk is 8,192 bytes.
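A minimal sketch of honoring that limit by splitting a larger audio buffer into chunks of at most 8,192 bytes before sending; connection is assumed to be the open WebSocket connection from the earlier examples:

// Split a buffer into chunks no larger than the 8,192-byte limit and send each one
const MAX_CHUNK_SIZE = 8192;

function sendAudio(connection, audioBuffer) {
  for (let offset = 0; offset < audioBuffer.length; offset += MAX_CHUNK_SIZE) {
    connection.send(audioBuffer.slice(offset, offset + MAX_CHUNK_SIZE));
  }
}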

Service Messages

This section describes the messages that originate in the service and are sent to the client.

The service sends mainly two types of messages (message_response, insight_response) to the client as soon as they're available.

Message Response

The message_response contains the processed messages as soon as they're ready and available, while the continuous audio stream is being processed. This message does not contain any insights.

Insight Response

Example of the insight_response object

{
  "type": "insight_response",
  "insights": [
    {
      "type": "question",
      "text": "So tell me, what is the most important quality that you acquired over all of your professional career?",
      "confidence": 0.9997962117195129,
      "hints": [],
      "tags": []
    },
    {
      "type": "action_item",
      "text": "Jane will look into the requirements on the hiring for coming financial year.",
      "confidence": 0.9972074778643447,
      "hints": [],
      "tags": [
        {
          "type": "person",
          "text": "Jane",
          "beginOffset": 0,
          "value": {
            "value": {
              "name": "Jane",
              "alias": "Jane",
              "userId": "jane.doe@symbl.ai"
            }
          }
        }
      ]
    }
  ]
}

The insight_response contains the insights from the ongoing conversation as soon as they are available. This message does not contain any messages.

Async API preview

The Async API provides a REST interface that allows you to run a job asynchronously in order to process insights out of audio, video and text files.

POST Authentication

If you don't already have your app id or app secret, log in to the platform to get your credentials.

To invoke any API call, you must have a valid Access Token generated using the valid application credentials.

To generate the token using the appId and appSecret, make an HTTP POST request with these details.

POST https://api.symbl.ai/oauth2/token:generate
{
  "type": "application",
  "appId": "your_appId",
  "appSecret": "your_appSecret"
}


curl -k -X POST "https://api.symbl.ai/oauth2/token:generate" \
     -H "accept: application/json" \
     -H "Content-Type: application/json" \
     -d "{ \"type\": \"application\", \"appId\": \"<appId>\", \"appSecret\": \"<appSecret>\"}"
 const request = require('request');

 const authOptions = {
   method: 'post',
   url: "https://api.symbl.ai/oauth2/token:generate",
   body: {
       type: "application",
       appId: "<appId>",
       appSecret: "<appSecret>"
   },
   json: true
 };

 request(authOptions, (err, res, body) => {
   if (err) {
     console.error('error posting json: ', err);
     throw err
   }

   console.log(JSON.stringify(body, null, 2));
 });

JavaScript code to generate the Access Token. The code should work with Node.js 7+ and browsers. You will need to install request to run this sample code.

$ npm i request

For a valid appId and appSecret combination, the success response will be returned like this.

 {
   "accessToken": "your_accessToken",
   "expiresIn": 3600
 }


For any invalid appId and appSecret combination, an HTTP 401 Unauthorized response code will be returned.

POST Async Audio API

The Async Audio API allows you to process an audio file and return the full text transcript along with conversational insights. It can be utilized for any use case where you have access to recorded audio and want to extract insights and other conversational attributes supported by Symbl's Conversation API.

Use the POST API to upload your file and generate a Conversation ID. If you want to append additional audio information to the same Conversation ID, use the PUT API.

Example API call - The sample request accepts just the raw audio file in the request body, with the MIME type set in the Content-Type header. The audio file should have only a mono channel.

# Wave file
curl --location --request POST 'https://api.symbl.ai/v1/process/audio?webhookUrl=<your_webhook_url>' \
--header 'Content-Type: audio/wav' \
--header 'x-api-key: <generated_valid_token>' \
--data-binary '@/file/location/audio.wav'

# MP3 File
curl --location --request POST 'https://api.symbl.ai/v1/process/audio?webhookUrl=<your_webhook_url>' \
--header 'Content-Type: audio/mpeg' \
--header 'x-api-key: <generated_valid_token>' \
--data-binary '@/file/location/audio.mp3'
const request = require('request');
const fs = require('fs');

const your_auth_token = "<your_auth_token>";
const your_webhook_url = "<your_webhook_url>";
const audio_file = fs.createReadStream('/file/location/audio.wav');

request.post({
  url: 'https://api.symbl.ai/v1/process/audio',
  headers: {'x-api-key': your_auth_token, 'Content-Type': 'audio/wav'},
  qs: {
    webhookUrl: your_webhook_url
  },
  body: audio_file,
  json: true
}, (err, response, body) => {
  console.log(body);
});

The above request returns a response structured like this:

{
  "conversationId": "5815170693595136",
  "jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d" 
}

HTTP REQUEST

POST https://api.symbl.ai/v1/process/audio

Request Headers

Header Name | Required | Value
x-api-key | Yes | your_auth_token
Content-Type | Yes | Describes the format and codec of the provided audio data. Accepted values are audio/wav and audio/mpeg.

Query Params

Parameter | Required | Value
webhookUrl | No | Webhook URL to which job updates will be sent. (This should be a POST endpoint.)

The webhookUrl will be used to send the status of the job created for the uploaded audio. Every time the status of the job changes, a notification will be sent to the webhookUrl.

Response Object on Success

Field | Description
conversationId | ID to be used with Conversation API
jobId | ID to be used with Job API

Response on reaching limit

Field | Description
Payload | { "message" : "This API has a limit of maximum of 5 number of concurrent jobs per account. If you are looking to scale, and need more concurrent jobs than this limit, please contact us at support@symbl.ai" }
Header | { "statusCode" : 429 }

Webhook Payload

Field | Description
jobId | ID to be used with Job API
status | Current status of the job. (Valid statuses - [scheduled, in_progress, completed])
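A minimal sketch of a webhook receiver for these updates, assuming an Express server reachable at the URL you passed as webhookUrl (Express is an assumption here, not part of Symbl's docs):

const express = require('express');

const app = express();
app.use(express.json());

// Receives the job status updates that Symbl pushes to the webhookUrl
app.post('/symbl-webhook', (req, res) => {
  const { jobId, status } = req.body; // Fields per the Webhook Payload table above
  console.log(`Job ${jobId} is now ${status}`);
  res.sendStatus(200);
});

app.listen(3000);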

POST Async Video API

The Async Video API allows you to process a video file to get the transcription and conversational insights. It can be useful in any use case where you have access to the video file of any type of conversation, and you want to extract the insightful items supported by the Conversation API.

Example API call - The sample request accepts just the raw video file in the request body, with the MIME type set in the Content-Type header.

# MP4 file
curl --location --request POST 'https://api.symbl.ai/v1/process/video?webhookUrl=<your_webhook_url>' \
--header 'Content-Type: video/mp4' \
--header 'x-api-key: <generated_valid_token>' \
--data-binary '@/file/location/your_video.mp4'
const request = require('request');
const fs = require('fs');

const your_auth_token = "<your_auth_token>";
const your_webhook_url = "<your_webhook_url>";
const video_file = fs.createReadStream('/file/location/your_video.mp4');

request.post({
  url: 'https://api.symbl.ai/v1/process/video',
  headers: {'x-api-key': your_auth_token, 'Content-Type': 'video/mp4'},
  qs: {
    webhookUrl: your_webhook_url
  },
  body: video_file,
  json: true
}, (err, response, body) => {
  console.log(body);
});

The above request returns a response structured like this:

{
  "conversationId": "5815170693595136",
  "jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d" 
}

HTTP REQUEST

POST https://api.symbl.ai/v1/process/video

Request Headers

Header Name | Required | Value
x-api-key | Yes | your_auth_token
Content-Type | Yes | Describes the format and codec of the provided video. Accepted value: video/mp4.

Query Params

Parameter | Required | Value
webhookUrl | No | Webhook URL to which job updates will be sent. (This should be a POST endpoint.)

The webhookUrl will be used to send the status of the job created for the uploaded video. Every time the status of the job changes, a notification will be sent to the webhookUrl.

Response Object on Success

Field | Description
conversationId | ID to be used with Conversation API
jobId | ID to be used with Job API

Response on reaching limit

Field | Description
Payload | { "message" : "Too Many Requests" }
Header | { "statusCode" : 429 }

Webhook Payload

Field | Description
jobId | ID to be used with Job API
status | Current status of the job. (Valid statuses - [in_progress, completed])

POST Async Text API

The Async Text API allows you to process any text payload to get the transcription and conversational insights. It can be useful in any use case where you have access to the textual content of a conversation, and you want to extract the insightful items supported by the Conversation API. If you want to add more content to the same conversation, use the PUT Async Text API.

Use the POST API to upload your content and generate a Conversation ID. If you want to append additional content to the same Conversation ID, use the PUT API.

Example API call

const request = require('request');

const options = {
  'method': 'POST',
  'url': 'https://api.symbl.ai/v1/process/text',
  'headers': {
    'Content-Type': 'application/json',
    'x-api-key': '<your_auth_token>'
  },
  body: JSON.stringify({
    "messages": [
      {
        "payload": { "content": "Okay" },
        "from": {
          "name": "John",
          "userId": "john@example.com"
        }
      },
      {
        "payload": { 
          "content": "Hello, this is Peter from Vodafone, How can I help you today?. My name is Sam, and I've been gone for more than two years. I'm really interested in upgrading to the latest iPhone. Can you tell me about some options? For quality assurance and training purposes. This call may be monitored and recorded. May I have your current phone number and the complete name and address of the current account My number is 1 2 3 5 5 5 7 8 9 0 and my address is 122 Raymer Avenue Seattle 98010. Thank you for the confirmation. Being a loyal customer there are three types of plan options that I can offer you today. Do you already know what you're looking for or would you prefer a recommendation?" 
        }, 
        "from": { 
          "name": "John", 
          "userId": "john@example.com" 
        }
      },
      // ....
    ],
    "confidenceThreshold": 0.5 
  })
};

request(options, function (error, response) { 
  if (error) throw new Error(error);
  console.log(response.body);
});

The above request returns a response structured like this:

{
  "conversationId": "5815170693595136",
  "jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d" 
}

HTTP REQUEST

POST https://api.symbl.ai/v1/process/text

Request Headers

Header Name | Required | Value
x-api-key | Yes | your_auth_token
Content-Type | Yes | application/json

Request Body

Field | Required | Type | Supported Values | Default | Description
messages | Yes | list | | | Input messages to look for insights. See the messages section below for more details.
confidenceThreshold | No | double | 0.0 to 1.0 | 0.5 | Minimum required confidence for the insight to be recognized.

messages

Field | Required | Type | Description
payload | Yes | object | The message payload to look for insights in. See the payload section below for more details.
from | No | object | Information about the user who produced the content of this message.
duration | No | object | Duration object containing startTime and endTime for the transcript.

payload

Field | Required | Type | Default | Description
contentType | No | string (MIME type) | text/plain | Indicates the type and/or format of the content. Please see RFC 6838 for more details. Currently only text/plain is supported.
content | No | string | | The content of the message, in the MIME type specified by the contentType field.

from (user)

Field | Required | Type | Description
name | No | string | Name of the user.
userId | No | string | A unique identifier of the user. E-mail ID is usually a preferred identifier for the user.

duration

Field | Required | Type | Description
startTime | No | DateTime | The start time for the particular text content.
endTime | No | DateTime | The end time for the particular text content.
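The request example above omits the optional duration object; a message that includes one might look like this (the timestamp values are illustrative):

{
  "payload": { "content": "Okay" },
  "from": {
    "name": "John",
    "userId": "john@example.com"
  },
  "duration": {
    "startTime": "2020-02-12T11:32:21.383Z",
    "endTime": "2020-02-12T11:32:22.983Z"
  }
}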

Query Params

Parameter | Required | Value
webhookUrl | No | Webhook URL to which job updates will be sent. (This should be a POST endpoint.)

The webhookUrl will be used to send the status of the job created for the submitted text. Every time the status of the job changes, a notification will be sent to the webhookUrl.

Response Object

Field | Description
conversationId | ID to be used with Conversation API
jobId | ID to be used with Job API

Response on reaching limit

Field | Description
Payload | { "message" : "This API has a limit of maximum of 5 number of concurrent jobs per account. If you are looking to scale, and need more concurrent jobs than this limit, please contact us at support@symbl.ai" }
Header | { "statusCode" : 429 }

Webhook Payload

Field | Description
jobId | ID to be used with Job API
status | Current status of the job. (Valid statuses - [scheduled, in_progress, completed])

PUT Async Audio API

The Async Audio API allows you to process an additional audio file for a previous conversation, append the transcription and get conversational insights for the updated conversation. It can be useful in any use case where you have access to multiple audio files of any type of conversation, and you want to extract the insightful items supported by the Conversation API.

Use the POST API to upload your file and generate a Conversation ID. If you want to append additional audio information to the same Conversation ID, use the PUT API.

Example API call - The sample request accepts just the raw audio file in the request body, with the MIME type set in the Content-Type header. The audio file should have only a mono channel.

# Wave file
curl --location --request PUT 'https://api.symbl.ai/v1/process/audio/:conversationId?webhookUrl=<your_webhook_url>' \
--header 'Content-Type: audio/wav' \
--header 'x-api-key: <generated_valid_token>' \
--data-binary '@/file/location/audio.wav'

# MP3 File
curl --location --request PUT 'https://api.symbl.ai/v1/process/audio/:conversationId?webhookUrl=<your_webhook_url>' \
--header 'Content-Type: audio/mpeg' \
--header 'x-api-key: <generated_valid_token>' \
--data-binary '@/file/location/audio.mp3'
const request = require('request');
const fs = require('fs');

const your_auth_token = "<your_auth_token>";
const your_webhook_url = "<your_webhook_url>";
const your_conversationId = "<your_conversationId>";
const audio_file = fs.createReadStream('/file/location/audio.wav');

request.put({
  url: 'https://api.symbl.ai/v1/process/audio/'+ your_conversationId,
  headers: {'x-api-key': your_auth_token, 'Content-Type': 'audio/wav'},
  qs: {
    webhookUrl: your_webhook_url
  },
  body: audio_file,
  json: true
}, (err, response, body) => {
  console.log(body);
});

The above request returns a response structured like this:

{
  "conversationId": "5815170693595136",
  "jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d" 
}

HTTP REQUEST

PUT https://api.symbl.ai/v1/process/audio/:conversationId

Request Headers

Header Name | Required | Value
x-api-key | Yes | your_auth_token
Content-Type | Yes | Describes the format and codec of the provided audio data. Accepted values are audio/wav and audio/mpeg.

Path Params

Parameter | Value
conversationId | The conversationId provided by the first request submitted using the POST Async Audio API

Query Params

Parameter | Required | Value
webhookUrl | No | Webhook URL to which job updates will be sent. (This should be a POST endpoint.)

The webhookUrl will be used to send the status of the job created for the uploaded audio. Every time the status of the job changes, a notification will be sent to the webhookUrl.

Response Object

Field | Description
conversationId | ID to be used with Conversation API
jobId | ID to be used with Job API

Webhook Payload

Field | Description
jobId | ID to be used with Job API
status | Current status of the job. (Valid statuses - [scheduled, in_progress, completed])

PUT Async Video API

The Async Video API allows you to process an additional video file for a previous conversation, append the transcription and get conversational insights for the updated conversation. It can be useful in any use case where you have access to multiple video files of any type of conversation, and you want to extract the insightful items supported by the Conversation API.

Example API call - The sample request accepts just the raw video file in the request body, with the MIME type set in the Content-Type header.

# MP4 File
curl --location --request PUT 'https://api.symbl.ai/v1/process/video/:conversationId?webhookUrl=<your_webhook_url>' \
--header 'Content-Type: video/mp4' \
--header 'x-api-key: <generated_valid_token>' \
--data-binary '@/file/location/your_video.mp4'
const request = require('request');
const fs = require('fs');

const your_auth_token = "<your_auth_token>";
const your_webhook_url = "<your_webhook_url>";
const your_conversationId = "<your_conversationId>";
const video_file = fs.createReadStream('/file/location/your_video.mp4');

request.put({
  url: 'https://api.symbl.ai/v1/process/video/'+ your_conversationId,
  headers: {'x-api-key': your_auth_token, 'Content-Type': 'video/mp4'},
  qs: {
    webhookUrl: your_webhook_url
  },
  body: video_file,
  json: true
}, (err, response, body) => {
  console.log(body);
});

The above request returns a response structured like this:

{
  "conversationId": "5815170693595136",
  "jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d" 
}

HTTP REQUEST

PUT https://api.symbl.ai/v1/process/video/:conversationId

Request Headers

Header Name | Required | Value
x-api-key | Yes | your_auth_token
Content-Type | Yes | Describes the format and codec of the provided video data. Accepted value: video/mp4.

Path Params

Parameter | Value
conversationId | The conversationId provided by the first request submitted using the POST Async Video API

Query Params

Parameter | Required | Value
webhookUrl | No | Webhook URL to which job updates will be sent. (This should be a POST endpoint.)

The webhookUrl will be used to send the status of the job created for the uploaded video. Every time the status of the job changes, a notification will be sent to the webhookUrl.

Response Object

Field | Description
conversationId | ID to be used with Conversation API
jobId | ID to be used with Job API

Webhook Payload

Field | Description
jobId | ID to be used with Job API
status | Current status of the job. (Valid statuses - [in_progress, completed])

PUT Async Text API

The Async Text API allows you to process an additional text payload, append it to the transcription of a previous conversation and get updated conversational insights. It can be useful in any use case where you have access to the textual content of a conversation, and you want to extract the insightful items supported by the Conversation API.

Use the POST API to upload your content and generate a Conversation ID. If you want to append additional content to the same Conversation ID, use the PUT API.

Example API call

const request = require('request');

const your_conversationId = '<your_conversationId>';

const options = {
  'method': 'PUT',
  'url': 'https://api.symbl.ai/v1/process/text/' + your_conversationId,
  'headers': {
    'Content-Type': 'application/json',
    'x-api-key': '<your_auth_token>'
  },
  'body': JSON.stringify({
    "messages": [
      {
        "payload": { "content": "Okay" },
        "from": {
          "name": "John",
          "userId": "john@example.com"
        }
      },
      {
        "payload": { 
          "content": "Hello, this is Peter from Vodafone, How can I help you today?. My name is Sam, and I've been gone for more than two years. I'm really interested in upgrading to the latest iPhone. Can you tell me about some options? For quality assurance and training purposes. This call may be monitored and recorded. May I have your current phone number and the complete name and address of the current account My number is 1 2 3 5 5 5 7 8 9 0 and my address is 122 Raymer Avenue Seattle 98010. Thank you for the confirmation. Being a loyal customer there are three types of plan options that I can offer you today. Do you already know what you're looking for or would you prefer a recommendation?" 
        }, 
        "from": { 
          "name": "John", 
          "userId": "john@example.com" 
        }
      },
      // ....
    ],
    "confidenceThreshold": 0.5 
  })
};

request(options, function (error, response) { 
  if (error) throw new Error(error);
  console.log(response.body);
});

The above request returns a response structured like this:

{
  "conversationId": "5815170693595136",
  "jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d" 
}

HTTP REQUEST

PUT https://api.symbl.ai/v1/process/text/:conversationId

Request Headers

Header Name | Required | Value
x-api-key | Yes | your_auth_token
Content-Type | Yes | application/json

Path Params

Parameter | Value
conversationId | The conversationId returned by the POST Async Text API for the text content

Request Body

Field | Required | Type | Supported Values | Default | Description
messages | Yes | list | | | Input messages to look for insights. See the messages section below for more details.
confidenceThreshold | No | double | 0.0 to 1.0 | 0.5 | Minimum required confidence for the insight to be recognized.

messages

Field | Required | Type | Description
payload | Yes | object | The message payload to look for insights in. See the payload section below for more details.
from | No | object | Information about the user who produced the content of this message.
duration | No | object | Duration object containing startTime and endTime for the transcript.

payload

Field | Required | Type | Default | Description
contentType | No | string (MIME type) | text/plain | Indicates the type and/or format of the content. Please see RFC 6838 for more details. Currently only text/plain is supported.
content | No | string | | The content of the message, in the MIME type specified by the contentType field.

from (user)

Field | Required | Type | Description
name | No | string | Name of the user.
userId | No | string | A unique identifier of the user. E-mail ID is usually a preferred identifier for the user.

duration

Field | Required | Type | Description
startTime | No | DateTime | The start time for the particular text content.
endTime | No | DateTime | The end time for the particular text content.

Query Params

Parameter | Required | Value
webhookUrl | No | Webhook URL to which job updates will be sent. (This should be a POST endpoint.)

The webhookUrl will be used to send the status of the job created for the submitted text. Every time the status of the job changes, a notification will be sent to the webhookUrl.

Response Object

Field | Description
conversationId | ID to be used with Conversation API
jobId | ID to be used with Job API

Webhook Payload

Field | Description
jobId | ID to be used with Job API
status | Current status of the job. (Valid statuses - [scheduled, in_progress, completed])

Conversation API

The Conversation API provides the REST API interface for the management and processing of your conversations.

POST Authentication

If you don't already have your app id or app secret, log in to the platform to get your credentials.

To invoke any API call, you must have a valid Access Token generated using the valid application credentials.

To generate the token using the appId and appSecret, make an HTTP POST request with these details.

POST https://api.symbl.ai/oauth2/token:generate
{
  "type": "application",
  "appId": "your_appId",
  "appSecret": "your_appSecret"
}


curl -k -X POST "https://api.symbl.ai/oauth2/token:generate" \
     -H "accept: application/json" \
     -H "Content-Type: application/json" \
     -d "{ \"type\": \"application\", \"appId\": \"<appId>\", \"appSecret\": \"<appSecret>\"}"
 const request = require('request');

 const authOptions = {
   method: 'post',
   url: "https://api.symbl.ai/oauth2/token:generate",
   body: {
       type: "application",
       appId: "<appId>",
       appSecret: "<appSecret>"
   },
   json: true
 };

 request(authOptions, (err, res, body) => {
   if (err) {
     console.error('error posting json: ', err);
     throw err
   }

   console.log(JSON.stringify(body, null, 2));
 });

JavaScript code to generate the Access Token. The code should work with Node.js 7+ and browsers. You will need to install request to run this sample code.

$ npm i request

For a valid appId and appSecret combination, the success response will be returned like this.

 {
   "accessToken": "your_accessToken",
   "expiresIn": 3600
 }


For any invalid appId and appSecret combination, an HTTP 401 Unauthorized response code will be returned.

GET conversation

Returns the conversation metadata.

API Endpoint

https://api.symbl.ai/v1/conversations/{conversationId}

Example API call

curl "https://api.symbl.ai/v1/conversations/{conversationId}" \
    -H "x-api-key: <api_token>"
const request = require('request');
const your_auth_token = '<your_auth_token>';

request.get({
    url: 'https://api.symbl.ai/v1/conversations/{conversationId}',
    headers: {'x-api-key': your_auth_token},
    json: true
}, (err, response, body) => {
    console.log(body);
});

The above request returns a response structured like this:

{
    "id": "5179649407582208",
    "type": "meeting",
    "name": "Project Meeting #2",
    "startTime": "2020-02-12T11:32:08.000Z",
    "endTime": "2020-02-12T11:37:31.134Z",
    "members": [
        {
            "name": "John",
            "email": "John@example.com"
        },
        {
            "name": "Mary",
            "email": "Mary@example.com"
        },
        {
            "name": "Roger",
            "email": "Roger@example.com"
        }
    ]
}

HTTP REQUEST

GET https://api.symbl.ai/v1/conversations/{conversationId}

Response Object

Field Description
id unique conversation identifier
type conversation type. default is meeting
name name of the conversation
startTime DateTime value
endTime DateTime value
members list of member objects containing name and email if detected

GET messages in a conversation

Returns a list of all the messages in a conversation

API Endpoint

https://api.symbl.ai/v1/conversations/{conversationId}/messages

Example API call

curl "https://api.symbl.ai/v1/conversations/{conversationId}/messages" \
    -H "x-api-key: <api_token>"
const request = require('request');
const your_auth_token = '<your_auth_token>';

request.get({
    url: 'https://api.symbl.ai/v1/conversations/{conversationId}/messages',
    headers: {'x-api-key': your_auth_token},
    json: true
}, (err, response, body) => {
    console.log(body);
});

The above request returns a response structured like this:

{
    "messages": [
        {
            "id": "5659996670918656",
            "text": "Sign something a little further.",
            "from": {
                "name": "Mary",
                "email": "Mary@example.com"
            },
            "startTime": "2020-02-12T11:32:21.383Z",
            "endTime": "2020-02-12T11:32:22.983Z",
            "transcriptId": "5694147767828480",
            "conversationId": "5708267674140672"
        },
        {
            "id": "5732040452341760",
            "text": "I guess we won't.",
            "from": {
                "name": "Roger",
                "email": "Roger@example.com"
            },
            "startTime": "2020-02-12T11:32:23.883Z",
            "endTime": "2020-02-12T11:32:24.582Z",
            "transcriptId": "5694147767828480",
            "conversationId": "5708267674140672"
        },
        {
            "id": "5630620503900160",
            "text": "Get too much more info on that.",
            "from": {
                "name": "John",
                "email": "John@example.com"
            },
            "startTime": "2020-02-12T11:32:24.582Z",
            "endTime": "2020-02-12T11:32:26.383Z",
            "transcriptId": "5694147767828480",
            "conversationId": "5708267674140672"
        }
    ]
}

HTTP REQUEST

GET https://api.symbl.ai/v1/conversations/{conversationId}/messages

Response Object

Field Description
id unique message identifier
text message text
from user object with name and email
startTime DateTime value
endTime DateTime value
transcriptId unique transcript identifier
conversationId unique conversation identifier
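
As an illustration of consuming this response, the sketch below (our own helper, not part of the API) turns the messages array into a simple speaker-labelled transcript:

// `body` is the parsed JSON response from the messages endpoint above.
function toTranscript(body) {
    return body.messages
        .map(m => `${(m.from && m.from.name) || 'Unknown'}: ${m.text}`)
        .join('\n');
}

// e.g. "Mary: Sign something a little further.\nRoger: I guess we won't. ..."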

GET all members in a conversation

Returns a list of all the members in a conversation

API Endpoint

https://api.symbl.ai/v1/conversations/{conversationId}/members

Example API call

curl "https://api.symbl.ai/v1/conversations/{conversationId}/members" \
    -H "x-api-key: <api_token>"
const request = require('request');
const your_auth_token = '<your_auth_token>';

request.get({
    url: 'https://api.symbl.ai/v1/conversations/{conversationId}/members',
    headers: {'x-api-key': your_auth_token},
    json: true
}, (err, response, body) => {
    console.log(body);
});

The above request returns a response structured like this:

{
    "members": [
        {
            "name": "John",
            "email": "John@example.com"
        },
        {
            "name": "Mary",
            "email": "Mary@example.com"
        },
        {
            "name": "Roger",
            "email": "Roger@example.com"
        }
    ]
}

HTTP REQUEST

GET https://api.symbl.ai/v1/conversations/{conversationId}/members

Response Object

Field Description
name member's name
email member's email

GET insights from a conversation

Returns all the insights in a conversation including Topics, Questions and Action Items

API Endpoint

https://api.symbl.ai/v1/conversations/{conversationId}/insights

Example API call

curl "https://api.symbl.ai/v1/conversations/{conversationId}/insights" \
    -H "x-api-key: <api_token>"
const request = require('request');
const your_auth_token = '<your_auth_token>';

request.get({
    url: 'https://api.symbl.ai/v1/conversations/{conversationId}/insights',
    headers: {'x-api-key': your_auth_token},
    json: true
}, (err, response, body) => {
    console.log(body);
});

The above request returns a response structured like this:

{
    "insights": [
        {
            "id": "5179649407582208",
            "text": "Push them for the two weeks delivery, right?",
            "type": "question",
            "score": 0.9730208796076476,
            "messageIds": [
                "e16d5c97-93ff-4ebf-aff7-8c6bba54747c"
            ],
            "entities": []
        },
        {
            "id": "5633940379402240",
            "text": "Mary thinks we need to go ahead with the TV in Bangalore.",
            "type": "action_item",
            "score": 0.8659442937321238,
            "messageIds": [
                "20c6b55a-4da6-45a5-bbea-b7c5053684c2"
            ],
            "entities": [],
            "assignee": {
                "name": "Mary",
                "email": "Mary@example.com",
                "phone": ""
            }
        },
        {
            "id": "5642466493464576",
            "text": "I think what is the Bahamas?",
            "type": "question",
            "score": 0.9119608386876195,
            "messageIds": [
                "538f9cec-a495-42cf-8e94-5c95e54f6b7d"
            ],
            "entities": []
        },
        {
            "id": "5644121934921728",
            "text": "Think we need to have a call with UV.",
            "type": "follow_up",
            "score": 0.8660254121940272,
            "messageIds": [
                "c4611a85-5893-40f8-a2f3-22b1f7eadc63"
            ],
            "entities": [],
            "assignee": {
                "name": "Mary",
                "email": "Mary@example.com"
            }
        }
    ]
}

HTTP REQUEST

GET https://api.symbl.ai/v1/conversations/{conversationId}/insights

Response Object

Field Description
id unique insight identifier
text insight text
type type of insight. values could be [question, action_item, follow_up]
score confidence score of the generated insight. value from 0 - 1
messageIds unique message identifiers of the corresponding messages
entities list of detected entities in the insight
assignee if an action item is generated, this field contains the name and email of the person assigned to it
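
Because this endpoint returns questions, action items and follow ups in a single list, it is often convenient to group them by type. A minimal sketch (our own helper, for illustration):

// `body` is the parsed JSON response from the insights endpoint above.
function groupInsightsByType(body) {
    const groups = {};
    for (const insight of body.insights) {
        (groups[insight.type] = groups[insight.type] || []).push(insight);
    }
    return groups; // e.g. { question: [...], action_item: [...], follow_up: [...] }
}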

GET topics from a conversation

Returns all the topics generated from a conversation

API Endpoint

https://api.symbl.ai/v1/conversations/{conversationId}/topics

Example API call

curl "https://api.symbl.ai/v1/{conversationId}/topics" \
    -H "x-api-key: <api_token>"
const request = require('request');
const your_auth_token = '<your_auth_token>';

request.get({
    url: 'https://api.symbl.ai/v1/conversations/{conversationId}/topics',
    headers: {'x-api-key': your_auth_token},
    json: true
}, (err, response, body) => {
    console.log(body);
});

The above request returns a response structured like this:

{
    "topics": [
        {
            "id": "5179649407582208",
            "text": "speakers",
            "type": "topics",
            "score": 0.9730208796076476,
            "messageIds": [
                "e16d5c97-93ff-4ebf-aff7-8c6bba54747c"
            ],
            "entities": [
                {
                    "type": "rootWord",
                    "text": "speakers"
                }
            ]
        }
    ]
}

HTTP REQUEST

GET https://api.symbl.ai/v1/conversations/{conversationId}/topics

Response Object

Field Description
id unique topic identifier
text topic text
type response type. default is topics
score confidence score of the generated topic. value from 0 - 1
messageIds unique message identifiers of the corresponding messages
entities list of detected entity objects in the insight with type - entity type and text - corresponding text

GET questions from a conversation

Returns all the questions generated from the conversation

API Endpoint

https://api.symbl.ai/v1/conversations/{conversationId}/questions

Example API call

curl "https://api.symbl.ai/v1/conversations/{conversationId}/questions" \ 
    -H "x-api-key: <api_token>"
const request = require('request');
const your_auth_token = '<your_auth_token>';

request.get({
    url: 'https://api.symbl.ai/v1/conversations/{conversationId}/questions',
    headers: {'x-api-key': your_auth_token},
    json: true
}, (err, response, body) => {
    console.log(body);
});

The above request returns a response structured like this:

{
    "questions": [
        {
            "id": "5179649407582208",
            "text": "Push them for the two weeks delivery, right?",
            "type": "question",
            "score": 0.9730208796076476,
            "messageIds": [
                "e16d5c97-93ff-4ebf-aff7-8c6bba54747c"
            ],
            "entities": []
        },
        {
            "id": "5642466493464576",
            "text": "I think what is the Bahamas?",
            "type": "question",
            "score": 0.9119608386876195,
            "messageIds": [
                "538f9cec-a495-42cf-8e94-5c95e54f6b7d"
            ],
            "entities": []
        },
        {
            "id": "5756718797553664",
            "text": "Okay need be detained, or we can go there in person and support them?",
            "type": "question",
            "score": 0.893303149769215,
            "messageIds": [
                "d382c499-c44f-4459-99f9-d984db1b9058"
            ],
            "entities": []
        },
        {
            "id": "6235991715086336",
            "text": "Why is that holiday in US from 17?",
            "type": "question",
            "score": 0.9998053310511206,
            "messageIds": [
                "ab88b466-1378-4cad-af45-0050e8ef097a"
            ],
            "entities": []
        }
    ]
}

HTTP REQUEST

GET https://api.symbl.ai/v1/conversations/{conversationId}/questions

Response Object

Field Description
id unique question identifier
text question text
type response type. default is question
score confidence score of the generated question. value from 0 - 1
messageIds unique message identifiers of the corresponding messages
entities list of detected entity objects in the insight with type - entity type and text - corresponding text

GET action items from a conversation

Returns a list of all the action items generated from the conversation

API Endpoint

https://api.symbl.ai/v1/conversations/{conversationId}/action-items

Example API call

curl "https://api.symbl.ai/v1/conversations/{conversationId}/action-items" \
    -H "x-api-key: <api_token>"
const request = require('request');
const your_auth_token = '<your_auth_token>';

request.get({
    url: 'https://api.symbl.ai/v1/conversations/{conversationId}/action-items',
    headers: {'x-api-key': your_auth_token},
    json: true
}, (err, response, body) => {
    console.log(body);
});

The above request returns a response structured like this:

{
    "actionItems": [
        {
            "id": "5633940379402240",
            "text": "Mary thinks we need to go ahead with the TV in Bangalore.",
            "type": "action_item",
            "score": 0.8659442937321238,
            "messageIds": [
                "20c6b55a-4da6-45a5-bbea-b7c5053684c2"
            ],
            "entities": [],
            "assignee": {
                "name": "Mary",
                "email": "Mary@example.com"
            }
        },
        {
            "id": "5668855401676800",
            "text": "Call and Stephanie also brought up something to check against what Ison is given as so there's one more test that we want to do.",
            "type": "action_item",
            "score": 0.8660254037845785,
            "messageIds": [
                "fc31a51c-5e18-41ea-a868-fa5065ccfa92"
            ],
            "entities": [],
            "assignee": {
                "name": "John",
                "email": "John@example.com"
            }
        },
        {
            "id": "5690029162627072",
            "text": "Checking the nodes with Eisner to make sure we covered everything so that will be x.",
            "type": "action_item",
            "score": 0.8657734634985154,
            "messageIds": [
                "24239f56-b4b3-4244-96db-1943f5978659"
            ],
            "entities": [],
            "assignee": {
                "name": "John",
                "email": "John@example.com"
            }
        },
        {
            "id": "5707174000984064",
            "text": "Roger is going to work with the TV lab and make sure that test is also included, so we are checking to make sure not only with our complaints.",
            "type": "action_item",
            "score": 0.9999962500210938,
            "messageIds": [
                "6ecb11ea-b311-4fd2-b3b5-f0694c809cc3"
            ],
            "entities": [],
            "assignee": {
                "name": "Roger",
                "email": "Roger@example.com"
            }
        },
        {
            "id": "5757280188366848",
            "text": "Mary thinks it really needs to kick start this week which means the call with UV team and our us team needs to happen the next couple of days.",
            "type": "action_item",
            "score": 0.9999992500008438,
            "messageIds": [
                "262534fa-36a8-4645-8d0f-e4b78e608325"
            ],
            "entities": [],
            "assignee": {
                "name": "Mary",
                "email": "Mary@example.com"
            },
            "dueBy": "2020-02-10T07:00:00.000Z"
        }
    ]
}

HTTP REQUEST

GET https://api.symbl.ai/v1/conversations/{conversationId}/action-items

Response Object

Field Description
id unique action item identifier
text action item text
type response type. default is action_item
score confidence score of the generated action item. value from 0 - 1
messageIds unique message identifiers of the corresponding messages
entities list of detected entity objects in the insight with type - entity type and text - corresponding text
assignee this field contains the name and email of the person assigned to the action item
dueBy if detected, the due date of the action item as a DateTime value
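
When dueBy is present, it can drive reminders or task creation in your own tooling. A small sketch (our own helper, for illustration) that selects action items due within the next two days:

// `body` is the parsed JSON response from the action-items endpoint above.
function dueSoon(body, days = 2) {
    const cutoff = Date.now() + days * 24 * 60 * 60 * 1000;
    return body.actionItems.filter(item =>
        item.dueBy && new Date(item.dueBy).getTime() <= cutoff);
}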

GET follow ups from a conversation

Returns a list of all the follow ups generated from the conversation

API Endpoint

https://api.symbl.ai/v1/conversations/{conversationId}/follow-ups

Example API call

curl "https://api.symbl.ai/v1/conversations/{conversationId}/follow-ups" \
    -H "x-api-key: <api_token>"
const request = require('request');
const your_auth_token = '<your_auth_token>';

request.get({
    url: 'https://api.symbl.ai/v1/conversations/{conversationId}/follow-ups',
    headers: {'x-api-key': your_auth_token},
    json: true
}, (err, response, body) => {
    console.log(body);
});

The above request returns a response structured like this:

{
    "followUps": [
        {
            "id": "4526427164639111",
            "text": "We need to have the meeting today, and we're going to talk about how to run a product strategy Workshop is by Richard Holmes.",
            "type": "follow_up",
            "score": 0.8660254037851491,
            "messageIds": [
                "4675554024357888"
            ],
            "entities": [
                {
                    "type": "date",
                    "text": "today",
                    "offset": 28,
                    "value": "2020-06-22"
                },
                {
                    "type": "person",
                    "text": "Richard Holmes",
                    "offset": 110,
                    "value": {
                        "name": "Richard Holmes"
                    }
                }
            ],
            "from": {},
            "assignee": {},
            "dueBy": "2020-06-22T07:00:00.000Z"
        }
    ]
}

HTTP REQUEST

GET https://api.symbl.ai/v1/conversations/{conversationId}/follow-ups

Response Object

Field Description
id unique follow up identifier
text follow up text
type response type. default is follow_up
score confidence score of the generated follow up. value from 0 - 1
messageIds unique message identifiers of the corresponding messages
entities list of detected entity objects in the insight with type - entity type and text - corresponding text
from user object with name and email
assignee this field contains the name and email of the person assigned to the follow up
dueBy if detected, the due date of the follow up as a DateTime value

GET intents from a conversation

Returns a list of all the intents generated from the conversation

API Endpoint

https://api.symbl.ai/v1/conversations/{conversationId}/intents

Example API call

curl "https://api.symbl.ai/v1/conversations/{conversationId}/intents" \
    -H "x-api-key: <api_token>"
const request = require('request');
const your_auth_token = '<your_auth_token>';

request.get({
    url: 'https://api.symbl.ai/v1/conversations/{conversationId}/intents',
    headers: {'x-api-key': your_auth_token},
    json: true
}, (err, response, body) => {
    console.log(body);
});

The above request returns a response structured like this:

{
    "intents": [
        {
            "id": "4884962113224704",
            "text": "Please don't call me again.",
            "type": "intent",
            "score": 0.9999999999998681,
            "messageIds": [
                "6469876998733824"
            ],
            "entities": [],
            "from": {},
            "intent": "do_not_call",
            "alternatives": [
                {
                    "intent": "not_interested",
                    "score": 0.5624889193061073
                },
                {
                    "intent": "interested",
                    "score": 0.3522290881325452
                }
            ]
        },
        {
            "id": "5475120562831360",
            "text": "No, like I said not really interested, please.",
            "type": "intent",
            "score": 0.8547157326500623,
            "messageIds": [
                "5775616943063040"
            ],
            "entities": [],
            "from": {},
            "intent": "not_interested",
            "alternatives": [
                {
                    "intent": "interested",
                    "score": 0.6695752246310334
                },
                {
                    "intent": "do_not_call",
                    "score": 0.4670510287533007
                }
            ]
        },
        {
            "id": "5538834724945920",
            "text": "No, I'm not really interested into that.",
            "type": "intent",
            "score": 0.8622937824480487,
            "messageIds": [
                "5881586738266112"
            ],
            "entities": [],
            "from": {},
            "intent": "not_interested",
            "alternatives": [
                {
                    "intent": "interested",
                    "score": 0.6928412521319762
                },
                {
                    "intent": "do_not_call",
                    "score": 0.4942100327495625
                }
            ]
        }
    ]
}

HTTP REQUEST

GET https://api.symbl.ai/v1/conversations/{conversationId}/intents

Response Object

Field Description
id unique intent identifier
text intent text
type response type. default is intent
score confidence score of the detected intent. value from 0 - 1
messageIds unique message identifiers of the corresponding messages
entities list of detected entity objects in the insight with type - entity type and text - corresponding text
from user object with name and email
intent can be any one of [ "interested", "not_interested", "do_not_call" ]
alternatives list of other alternative intents passed with their scores
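
When acting on a detected intent, you may want to fall back to the alternatives if the primary intent's confidence is low. A hedged sketch (the 0.7 threshold is an arbitrary choice for illustration, not an API recommendation):

// `detected` is one element of the intents array returned above.
function resolveIntent(detected, threshold = 0.7) {
    if (detected.score >= threshold) return detected.intent;
    const best = detected.alternatives
        .filter(alt => alt.score >= threshold)
        .sort((a, b) => b.score - a.score)[0];
    return best ? best.intent : 'unknown';
}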

Job API

The Job Status API is used to retrieve the status of an ongoing async audio request. You can use the jobId received in the successful response of the Async API.

POST Authentication

If you don't already have your app id or app secret, log in to the platform to get your credentials.

To invoke any API call, you must have a valid Access Token generated using valid application credentials.

To generate the token using your appId and appSecret, make an HTTP POST request with these details.

POST https://api.symbl.ai/oauth2/token:generate
{
  "type": "application",
  "appId": "your_appId",
  "appSecret": "your_appSecret"
}


curl -k -X POST "https://api.symbl.ai/oauth2/token:generate" \
     -H "accept: application/json" \
     -H "Content-Type: application/json" \
     -d "{ \"type\": \"application\", \"appId\": \"<appId>\", \"appSecret\": \"<appSecret>\"}"
 const request = require('request');

 const authOptions = {
   method: 'post',
   url: "https://api.symbl.ai/oauth2/token:generate",
   body: {
       type: "application",
       appId: "<appId>",
       appSecret: "<appSecret>"
   },
   json: true
 };

 request(authOptions, (err, res, body) => {
   if (err) {
     console.error('error posting json: ', err);
     throw err
   }

   console.log(JSON.stringify(body, null, 2));
 });

JavaScript code to generate the Access Token. The code should work with Node.js 7+ and modern browsers. You will need to install the request module to run this sample code.

$ npm i request

For a valid appId and appSecret combination, a success response like this will be returned:

 {
   "accessToken": "your_accessToken",
   "expiresIn": 3600
 }


For an invalid appId and appSecret combination, an HTTP 401 Unauthorized response code will be returned.

GET Job Status

Returns the status of the ongoing Async job request

API Endpoint

https://api.symbl.ai/v1/job/{jobId}

Example API call

curl --location --request GET 'https://api.symbl.ai/v1/job/{jobId}' \
--header 'Content-Type: application/json' \
--header 'x-api-key: <generated_valid_token>'
const request = require('request');
const your_auth_token = '<your_auth_token>';

request.get({
    url: 'https://api.symbl.ai/v1/job/{jobId}',
    headers: {'x-api-key': your_auth_token},
    json: true
}, (err, response, body) => {
  console.log(body);
});

The above request returns a response structured like this:

{
  "id": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d",
  "status": "in_progress" 
}

HTTP REQUEST

GET https://api.symbl.ai/v1/job/{jobId}

Response Parameters

Parameter Description
id The ID of the Job
status One of completed, failed, or in_progress
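
Because async jobs are processed in the background, a common pattern is to poll this endpoint until the job reaches a terminal state (or to rely on the webhookUrl described earlier instead of polling). A minimal polling sketch using the same request module; waitForJob is our own helper name:

const request = require('request');
const your_auth_token = '<your_auth_token>';

function waitForJob(jobId, callback) {
    request.get({
        url: `https://api.symbl.ai/v1/job/${jobId}`,
        headers: {'x-api-key': your_auth_token},
        json: true
    }, (err, response, body) => {
        if (err) return callback(err);
        if (body.status === 'completed' || body.status === 'failed') {
            return callback(null, body.status);
        }
        // Still scheduled or in_progress; check again in a few seconds.
        setTimeout(() => waitForJob(jobId, callback), 5000);
    });
}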

Testing

Now that you've built out an integration using either the Voice SDK or Voice API, let's test to make sure your integration is working as expected.

  1. If you are dialed in with your phone number, try speaking a few sentences to see the generated output.

  2. If you are dialed into a meeting, try playing a video with your meeting platform open and view the summary email that gets generated.

  3. Try tuning your summary page with query parameters to customize your output.

Errors

// example auth token is incorrect
{
    "message": "Token validation failed for provided token."
}

Symbl uses the following HTTP codes:

Error Code Meaning
200 OK -- Success.
400 Bad Request -- Your request is invalid.
401 Unauthorized -- Your API key is invalid.
403 Forbidden -- You don't have permission to access the requested resource.
404 Not Found -- The specified resource does not exist.
405 Method Not Allowed -- You tried to access an API with an invalid HTTP method.
429 Too Many Requests -- Too many requests hit the API too quickly.
500 Internal Server Error -- We had a problem with our server. Try again later.
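
In practice, check the HTTP status code before using the response body. A minimal sketch (our own, for illustration) that detects an invalid or expired token:

const request = require('request');

request.get({
    url: 'https://api.symbl.ai/v1/conversations/{conversationId}',
    headers: {'x-api-key': '<your_auth_token>'},
    json: true
}, (err, response, body) => {
    if (err) return console.error('request failed: ', err);
    if (response.statusCode === 401) {
        // Token is invalid or expired; generate a new one and retry.
        return console.error(body.message);
    }
    if (response.statusCode !== 200) {
        return console.error('unexpected status: ' + response.statusCode);
    }
    console.log(body);
});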

Resources

Async Audio Conversion

The snippet below shows how you can convert a file from .mp4 to .mp3 using the fluent-ffmpeg Node module.

const ffmpeg = require('fluent-ffmpeg');

// Requires the ffmpeg binary to be installed and available on the PATH.
ffmpeg('/my/path/to/original/file.mp4')
    .format('mp3')        // transcode the output to mp3
    .audioChannels(1)     // downmix to mono, as required by the Async Audio API
    .on('end', () => {
        console.log('Conversion finished.');
    })
    .on('error', (err) => {
        console.error('Conversion failed: ', err);
    })
    .save('/path/to/output/file.mp3');

The Async Audio API supports .wav and .mp3 files only, and the file must have mono-channel audio. Files in any other format can be converted using the FFmpeg snippet above.

$ npm install --save fluent-ffmpeg

Overview

The Symbl.ai platform provides conversation intelligence as a service. The platform enables real-time intelligence in business conversations, recognizing action items, insights, questions, contextual topics, summaries, etc.

What is conversational intelligence?

In its pure form, conversation intelligence refers to the ability to communicate in ways that create a shared concept of reality. It begins with trust and transparency to remove biases in decisions, enable participants, such as knowledge workers, to be more effective at their core function, eradicate mundane and repetitive parts of their work and empower participants both at work and beyond.

Here at Symbl.ai, we are using artificial intelligence methods such as machine learning and deep learning to augment human capability by analyzing conversations and surfacing knowledge and actions that matter.

How is Symbl.ai different from chatbot platforms?

In short, very.

Also, in short: chatbots are intent-based, rule-based, often launched by ‘wake words’, and enable short conversations between humans and machines.

Symbl.ai is a developer platform and service capable of understanding context and meaning in natural conversations between humans. It can surface the things that matter in real-time, e.g. questions, action items, insights, contextual topics, signals, etc.

Slightly longer answer:

Chatbots or virtual assistants are commonly command-driven and often referred to as conversation AI systems. They add value to direct human-machine interaction via auditory or textual methods, and attempt to convincingly simulate how a human would behave in a conversation.

You can build chatbots by using existing intent-based systems like RASA, DialogFlow, Watson, Lex, etc. These systems identify intent based on the training data you provide, and these systems enable you to create rule-based conversation workflows between humans and machines.

We are building a platform that can contextually analyze natural conversations between two or more humans based on the meaning as opposed to keywords or wake words. We are also building it with models that require no training, so you can analyze conversations on both audio and text channels and get recommended outcomes without needing to train a custom engine for every new intent.

Imagine embedding a passive intelligence in existing products or workflows, natively. Every bit of conversational data flowing through is parsed and used to surface real-time actions and outcomes.

Next: explore supported use cases

Use Cases

Working closely with early customers and their developers, we have received very positive feedback on several use cases. Among these, in highest demand are use cases for meetings, unified communication and collaboration, customer care, sales enablement, workflow management, and recruitment.

Meetings & UCaaS

Applying primarily to unified communication and collaboration platforms (UCaaS), you can add real-time recommendations of action items and next steps as part of your existing workflow. This would meaningfully improve meeting productivity by surfacing the things that matter, as the meeting occurs. Beyond real-time prompts, take advantage of automated meetings summaries delivered to your preferred channel, like email, chat, Slack, calendar, etc.

Use real-time contextual recommendations to enable participants to drive efficiencies in their note-taking, save time and focus more on the meeting itself. Action items are surfaced contextually and in real-time and can be automated to trigger your existing workflows.

Post-meeting summaries are helpful for users who prefer to stay fully involved in the conversation as it happens and revisit information and action items after the meeting.

Benefits:

Customer Care & CCaaS

As we understand it, customer care performance can be measured by 3 proxy metrics: customer satisfaction, time spent on call, and the number of calls serviced.

What if the introduction of a real-time, passive conversation intelligence service into each call were to improve all 3 metrics at once? Real-time contextual understanding leads to suggested actions that a customer care agent can act upon during the call, enabling the agent to 1) focus on the human connection with the customer, 2) come to a swifter resolution thanks to task automation, and 3) serve more customers with higher satisfaction during a shift.

Further, the Symbl.ai platform is also capable of automating post-call data collection. This enables analysis of support conversations across time, agents, shifts, and groups, which leads to a better understanding of pain points, topics of customer support conversations, etc.

Benefits: Support Organization

Sales Enablement & CRM

Digital communication platforms used for sales engagements and customer interactions need to capture conversational data for benchmarking performance, improving net sales, and identifying and replicating the best-performing sales scripts.

Use Symbl.ai to identify top-performing pitches by leveraging real-time insights. Accelerate the sales cycle by automating suggested action items in real-time, such as scheduling tasks and follow-ups via outbound work tool integrations. Keep your CRM up to date by automating the post-call entry with useful summaries.

Benefits: Sales Agent

Benefits: Sales Enablement / VP of Sales

Social Media Conversations

Customers interact a lot with brands on social media and other digital channels. These interactions include feedback, reviews, complaints, and many other mentions. Used properly, this is valuable data from which to derive insights for the business.

Symbl's APIs can be used along with social listening tools to extract and categorize all of this into actionable insights. For example, topics can be very helpful in abstracting data from product reviews, threads of conversation, and social media comments. Questions and requests from social interactions and forums can be identified to build a knowledge base and direct customer conversations to the right resources.

With the right integrations to CRM and knowledge base tools, insights from social conversations can lead to a better understanding of customer sentiment towards the brand and more efficient customer service on social channels.

Benefits for Brands

Next: Learn more about the capabilities of the platform

Capabilities

Transcript

The platform provides a searchable transcript with timecodes and speaker information. The transcript is a refined output of the speech-to-text conversion. Our platform does not carry its own speech-to-text capability; it is compatible with a range of ASR APIs including Google, Amazon, Microsoft, etc.

The transcript is one of the easiest ways to navigate through the entire conversation. It can be sorted using speaker-specific or topic-specific filters. Additionally, each insight or action item can also lead to related parts of the transcript.

Transcripts are available in real-time for voice and video conversations. They can also be accessed through the post-conversation summary UI.

The post-conversation summary page enables editing, copying and sharing of transcripts from the conversation.

Summary Topics

Summary topics provide a quick overview of the key things that were talked about in the conversation. IMPORTANT: summary topics are not detected based on the frequency of their occurrence in the conversation; they are detected contextually, and each summary topic is an indication of one or more important topics of discussion in the conversation.

Each summary topic has a score that indicates its importance in the context of the entire meeting. It is not uncommon for less frequently mentioned topics to be of higher importance in the conversation, and this is reflected in a higher score for those topics, even when other summary topics have more mentions in the overall conversation.

Contextual hierarchies

The summary topics have contextual hierarchies in them. High-level topics represent various concepts that the conversation is about, while lower-level topics are aspects of those high-level concepts, which provide a more contextual understanding of the high-level concepts discussed in the conversation.

For example, higher-level concepts could be Pricing or Revenue or Production Issues, etc. and lower-level aspects could be like these -

High Level (Concept) Low Level (Aspect)
Pricing Selling Price, Paying Capacity, Cost-based pricing
Revenue Revenue Growth, Higher Margin, Revenue Model
Production Issues Critical Issue, Downtime, Unstable Production

This table shows how the “Aspects” provide more information about one or more “Concepts” in the conversation. From it, you can understand which aspects of each high-level concept were discussed in a given conversation.

Action Items

An action item is a specific outcome recognized in the conversation that requires one or more people in the conversation to take a specific action, e.g. set up a meeting, share a file, complete a task, etc.

Action Item Features

Conversations contain various types of actionable parts, and the platform can recognize their different connotations.

Definitive

A definitive connotation is used to indicate the importance, definitiveness, and predictability of a certain action. Usually, this type of action item indicates the commitment to the task.

Examples:

"We need to fix all the critical issues by tomorrow". Here, there is a definitive requirement for a group of people indicated by "we" to fix the critical issues by tomorrow. "Please make sure that the hall is booked for the 25th". Here, even though the tone of the action item is not a command, still the request suggests that this task needs to be completed.

The platform can recognize these types of connotations on top of recognizing the actionable item itself and indicate it in the output.

Non-Definitives

There can be other actionable items that are not definitive in nature but still indicate some future action. For example, it can simply be someone's opinion that indicates a future action.

"I think we should spend more time reviewing the document". Here, to spend more time in review of the document is an opinion of this person but it's not something they are committing to.

Tasks

Definitive action items that are not follow-ups are categorized as tasks.

Example: "I will complete the presentation that needs to be presented to the management by the end of today". Here, a person is really committed to completing the presentation (task) by the end of today.

Follow Ups

The platform can recognize if an action item has a connotation, which requires following up in general or by someone in particular.

Examples:

"I will talk to my manager and find out the agreed dates with the vendor". Here, a person needs to follow up with their manager in order to complete this action.

"Perhaps I can submit the report today". Here, the action of submitting the report is indicated, but the overall connotation of it doesn't indicate the commitment.

Follow-ups can also be non-definitive

Example:

“We’ll need to sync up with the design team to find out more details”. Here, it’s clear that there needs to be a follow-up, but the details on when and how are not defined.

 Follow Up Non-Follow Up
Definitive Follow Up (defined data) Task
Non-Definitive Follow Up (non-defined) Idea/Opinion

Other Insight Types

Questions

Any explicit question or request for information that comes up during the conversation, whether answered or not, is recognized as a question.

Examples:

“What features are most relevant for our use case?” “How are we planning to design the systems?”

Suggestive Actions

For each of the action items identified from the conversation, certain suggestive actions are recommended based on the available work tool integrations.

Example:

Outbound Work Tool Integrations

The platform currently offers email and calendar as out-of-the-box integrations. However, this can be extended to any work tool to which actionable insights need to be pushed, enhancing productivity and reducing the time users spend manually entering information from conversations. The same integrations can be enabled as suggestive actions to make this even quicker.

Some of the examples of these work tools can be:

Reusable and Customizable UI Components

The pre-built UI components can be broadly divided into two areas:

  1. Real-time UI components
  2. Summary Page UI

Real-Time UI Components

Real-time UI Components help showcase the transcription, insights and action items during the conversation itself. These are customizable, embeddable components that can be directly used in any product.

Real-time UI components are available for

Summary Page UI

At the end of each conversation, a summary of the conversation is generated and the page URL is shared via email to all (or selected) participants.

The Summary page UI includes the following components

The post-conversation summary page is also fully customizable, as per the use case or product requirements.