Async feature reference

When you process a conversation with the Async API, you can also apply a subset of Symbl.ai's conversation intelligence features. This guide briefly describes the available features and provides basic examples of invoking them.

The following table gives a brief explanation of the different conversation intelligence features that can be used when you process a conversation.

| Feature | Description |
| --- | --- |
| Actionable phrases | Identify phrases in conversations that may contain actionable information. If any are found, these phrases are included when you get a Speech-to-text transcript. |
| Custom vocabulary | For audio and video sources, custom vocabulary gives hints about known words that occur in the conversation to improve speech recognition. |
| Entity detection | Identify entities such as PII, PCI, and PHI data, dates, times, numbers, and more in your conversations. For more information about entities, see Entity detection. |
| Speaker separation | Intelligently identify different speakers in a conversation. For more information, see Apply speaker separation to audio and video recordings. |
| Transcription for different languages | Enable speech-to-text transcription for any of the supported languages. For more information, see Supported languages. |
| Track custom phrases | Implement trackers that intelligently identify phrases based on vocabulary that you provide. For more information about trackers, see Trackers. |
| Conversation summary | Create an AI-generated summary of the conversation. For more information about summarization, see Summary. |

Request parameters

For the Async API, you enable the conversation intelligence features by adding parameters to your request. The parameters are the same for every request, but how you deliver them depends on what you submit:

  • When you submit video and audio files, you specify the values as query parameters.
  • When you submit URLs and text, you specify the values as body parameters.
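
The difference can be sketched as follows. This is illustrative only: it uses Node's built-in `URLSearchParams` for a flat parameter set, while the full examples below use the `qs` library, which also handles nested values such as `customVocabulary` and `trackers`.

```javascript
// Illustrative only: the same parameters, serialized for each delivery method.
const params = {
  name: 'Business Meeting',
  confidenceThreshold: 0.7,
  enableSummary: true
};

// File submissions: the values go in the query string.
const query = new URLSearchParams({
  name: params.name,
  confidenceThreshold: String(params.confidenceThreshold),
  enableSummary: String(params.enableSummary)
}).toString();
console.log(`https://api.symbl.ai/v1/process/video?${query}`);

// URL and text submissions: the same values go in the JSON request body.
console.log(JSON.stringify(params));
```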

The following table describes the parameters you can include in a request.

| Parameter | Type | Description | Example |
| --- | --- | --- | --- |
| `webhookUrl` | String, optional | The webhook URL for your application. When the status of the processing job is updated, the Async API sends an HTTP request to the URL that you specify. | `'webhookUrl': 'https://webhook.site/12acab7f-2b5d-1490-89b5-bccb9c59a0d3'` |
| `confidenceThreshold` | Double, optional | Minimum confidence score required for the API to consider an insight (action items, follow-ups, topics, and questions) valid. Must be greater than or equal to 0.5 and less than or equal to 1.0. If not included, the default value is 0.5. | `'confidenceThreshold': 0.6` |
| `customVocabulary` | List, optional | A list of words and phrases that provide hints to the speech recognition task. | `'customVocabulary': ['marketing director', 'meeting', 'customer']` |
| `detectPhrases` | Boolean, optional | Detects actionable phrases in each sentence of the conversation. These phrases are included when you get a Speech-to-text transcript. If not included, the default value is false. | `'detectPhrases': true` |
| `detectEntities` | Boolean, optional | Enables the detection of entities in your conversation. For more information, see Entity detection. If not included, the default value is true. | `'detectEntities': true` |
| `enableAllTrackers` | Boolean, optional | Enables detection of all trackers that you have created. For more information, see Trackers. Enabled by default; to disable, set the value to false. | `'enableAllTrackers': true` |
| `trackers` | List, optional | Adds custom trackers that are detected only in this conversation. Requires that `enableAllTrackers` is true. | `'trackers': [{ 'name': 'gratitude', 'vocabulary': ['thanks', 'thank you', 'thank you for your time', 'i appreciate it', 'we appreciate it'] }]` |
| `enableSpeakerDiarization` | Boolean, optional | Enables intelligent speaker separation. If enabled, you must also include `diarizationSpeakerCount`. If not included, the default value is false. | `'enableSpeakerDiarization': true` |
| `diarizationSpeakerCount` | Integer, required if `enableSpeakerDiarization` is true | Sets the expected number of speakers for speaker separation. | `'diarizationSpeakerCount': 2` |
| `enableSummary` | Boolean, optional | Generates a summary for the conversation as it is processed. If not included, the default value is false. | `'enableSummary': true` |
| `languageCode` | String, optional | Specifies the language to use for Speech-to-text transcripts. For more information, see Supported languages. If not included, the default value is en-US. | `'languageCode': 'es-ES'` |
| `mode` | String, optional | Valid values are `phone` and `default`. `phone` mode works best for audio generated from a phone call (typically recorded at an 8 kHz sampling rate). `default` mode works best for audio generated from video or online meetings (typically recorded at 16 kHz or higher). If not included, the default value is default. | `'mode': 'phone'` |

Example requests

This section provides three Node.js example requests:

Submit video file with parameters

The following Node.js code sample enables the Async conversation intelligence features when submitting a video file. In this example, because the request submits a video file, the request parameters are delivered as query parameters.

👍

Try our interactive examples!

We provide interactive versions of these code samples: Node.js

To get started with our code samples, see Set up your test environment.

import fetch from 'node-fetch';
import fs from 'fs';
import qs from 'qs';

const accessToken = '<ACCESS_TOKEN>';
const filePath = 'BusinessMeeting.mp4';
const symblaiParams = {
  'name': 'Business Meeting',
  'webhookUrl': '<WEBHOOK>',
  'confidenceThreshold': 0.7,
  'customVocabulary': [
    'marketing director',
    'meeting',
    'customer'
  ],
  'detectEntities': true,
  'entities': [
    {
      'customType': 'executives',
      'text': 'marketing director'
    }
  ],
  'detectPhrases': true,
  'enableAllTrackers': true,
  'trackers': [
    {
      'name': 'gratitude',
      'vocabulary': [
        'thanks',
        'thank you',
        'thank you for your time',
        'i appreciate it',
        'we appreciate it'
      ]
    }
  ],
  'enableSpeakerDiarization': true,
  'diarizationSpeakerCount': 3,
  'enableSummary': true,
  'languageCode': 'en-US',
  'mode': 'default'
}

const fetchResponse = await fetch(`https://api.symbl.ai/v1/process/video?${qs.stringify(symblaiParams)}`, {
  method: 'post',
  body: fs.createReadStream(filePath),
  headers: {
    'Authorization': `Bearer ${accessToken}`,
    'Content-Type': 'video/mp4',
  }
});

const responseBody = await fetchResponse.json();

console.log(responseBody);

Submit audio URL with parameters

The following Node.js code sample enables the Async conversation intelligence features when submitting an audio URL. In this example, because the request submits an audio URL, the request parameters are delivered as body parameters.


import fetch from 'node-fetch';

const accessToken = '<ACCESS_TOKEN>';
const symblaiParams = {
  'name': 'Business Meeting',
  'url': 'https://symbltestdata.s3.us-east-2.amazonaws.com/newPhonecall.mp3',
  'webhookUrl': '<WEBHOOK>',
  'confidenceThreshold': 0.7,
  'customVocabulary': [
    'marketing director',
    'meeting',
    'customer'
  ],
  'detectEntities': true,
  'entities': [
    {
      'customType': 'contacts',
      'text': 'technician'
    }
  ],
  'detectPhrases': true,
  'enableAllTrackers': true,
  'trackers': [
    {
      'name': 'gratitude',
      'vocabulary': [
        'thanks',
        'thank you',
        'thank you for your time',
        'i appreciate it',
        'we appreciate it'
      ]
    }
  ],
  'enableSpeakerDiarization': true,
  'diarizationSpeakerCount': 2,
  'enableSummary': true,
  'languageCode': 'en-US',
  'mode': 'phone'
}

const fetchResponse = await fetch('https://api.symbl.ai/v1/process/audio/url', {
  method: 'post',
  body: JSON.stringify(symblaiParams),
  headers: {
    'Authorization': `Bearer ${accessToken}`,
    'Content-Type': 'application/json'
  }
});

const responseBody = await fetchResponse.json();

console.log(responseBody);

Submit text with parameters

The following Node.js code sample enables the Async conversation intelligence features when submitting text. In this example, because the request submits text, the request parameters are delivered as body parameters.


import fetch from 'node-fetch';

const accessToken = '<ACCESS_TOKEN>';
const symblaiParams = {
  'name': 'Business Meeting',
  'webhookUrl': '<WEBHOOK>',
  'confidenceThreshold': 0.7,
  'detectEntities': true,
  'entities': [
    {
      'customType': 'contacts',
      'text': 'technician'
    }
  ],
  'detectPhrases': true,
  'enableAllTrackers': true,
  'trackers': [
    {
      'name': 'gratitude',
      'vocabulary': [
        'thanks',
        'thank you',
        'thank you for your time',
        'i appreciate it',
        'we appreciate it'
      ]
    }
  ],
  'enableSummary': true,
  'messages': [
    {
      "duration": {
        "startTime": "2020-07-21T16:04:19.99Z",
        "endTime": "2020-07-21T16:04:20.99Z"
      },
      "payload": {
        "content": "Hello. I installed your internet service for my new home on July 1 and it's really slow. Is there anything I can do to improve the performance? Thanks for your help."
      },
      "from": {
        "name": "Customer",
        "userId": "[email protected]"
      }
    },
    {
      "duration": {
        "startTime": "2020-07-21T16:04:21.99Z",
        "endTime": "2020-07-21T16:04:23.99Z"
      },
      "payload": {
        "content": "I'm sorry to hear about the inconvenience. I can send an internet service technician to your home whenever you're free. Please respond to this message with your availability."
      },
      "from": {
        "name": "Agent",
        "userId": "[email protected]"
      }
    },
    {
      "duration": {
        "startTime": "2020-07-21T16:04:24.99Z",
        "endTime": "2020-07-21T16:04:26.99Z"
      },
      "payload": {
        "content": "I will finish my work by 5 PM so you could send a technician after that. Does that time work okay?"
      },
      "from": {
        "name": "Customer",
        "userId": "[email protected]"
      }
    },
    {
      "duration": {
        "startTime": "2020-07-21T16:04:27.99Z",
        "endTime": "2020-07-21T16:04:29.99Z"
      },
      "payload": {
        "content": "I'll follow up with the technician. I will call you in an hour to confirm your appointment. Thank you."
      },
      "from": {
        "name": "Agent",
        "userId": "[email protected]"
      }
    }
  ]
}

const fetchResponse = await fetch('https://api.symbl.ai/v1/process/text?enableSummary=true', {
  method: 'post',
  body: JSON.stringify(symblaiParams),
  headers: {
    'Authorization': `Bearer ${accessToken}`,
    'Content-Type': 'application/json'
  }
});

const responseBody = await fetchResponse.json();

console.log(responseBody);

Response

The Async API returns a common response for all submit and append requests. The following table describes the fields in the response.

| Field | Description |
| --- | --- |
| `conversationId` | The unique identifier of a conversation that is submitted to the Async API. The conversation ID is critical for generating Conversation Intelligence. |
| `jobId` | The unique identifier of the processing job. The job ID can be used to get the Job status. |

Example response

The following is an example of the common response for submit and append requests to the Async API.

{
  conversationId: '5784375198220288',
  jobId: 'cf4a68fe-225a-4946-9819-d961d7a31058'
}
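
After you submit a conversation, you can poll the job status until processing completes. The following sketch is an assumption-laden helper, not confirmed API behavior: it assumes the Job status endpoint is `GET https://api.symbl.ai/v1/job/{jobId}` and that the returned `status` field settles on `completed` or `failed`; verify both against the Job status reference. The network call is shown as a comment because it requires a valid access token.

```javascript
// Sketch of job-status polling helpers. The endpoint path and status values
// are assumptions based on the Job status reference; verify before use.
const jobStatusUrl = (jobId) => `https://api.symbl.ai/v1/job/${jobId}`;
const isFinished = (status) => status === 'completed' || status === 'failed';

// Usage (requires node-fetch and a valid access token):
// const response = await fetch(jobStatusUrl(jobId), {
//   headers: { 'Authorization': `Bearer ${accessToken}` }
// });
// const { status } = await response.json();
// if (!isFinished(status)) { /* wait a few seconds, then poll again */ }

console.log(jobStatusUrl('cf4a68fe-225a-4946-9819-d961d7a31058'));
```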