Async Feature Reference

When you process a conversation with the Async API, you can also apply a subset of Symbl.ai's conversation intelligence features. This guide briefly describes the available features and provides basic examples of invoking them.

The following table gives a brief explanation of the different conversation intelligence features that can be used when you process a conversation.

| Feature | Description |
| --- | --- |
| Actionable phrases | Identify phrases in conversations that may contain actionable information. If any are found, these phrases are included when you get a Speech-to-Text transcript. |
| Custom vocabulary | For audio and video sources, custom vocabulary gives hints about known words that occur in the conversation to improve speech recognition. |
| Entity detection | Identify entities such as dates, times, numbers, and more in your conversations. For more information about entities, see Entities. |
| Speaker separation | Intelligently identify different speakers in a conversation. For more information about speaker separation, see Apply Speaker Separation to Async Files. |
| Transcription for different languages | Enable speech-to-text transcription for any of the supported languages. For more information, see Supported Languages. |
| Track custom phrases (Beta) | Implement trackers that intelligently identify phrases based on vocabulary that you provide. For more information about trackers, see Trackers (Beta). |
| Conversation summary (Beta) | Create an AI-generated summary of the conversation. For more information about summarization, see Summary (Beta). |


Request parameters

For the Async API, you enable the conversation intelligence features by adding parameters to your request. The parameters are the same for every request, but how you deliver them depends on what you submit:

  • When you submit video and audio files, you specify the values as query parameters.
  • When you submit URLs and text, you specify the values as body parameters.
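The two delivery styles can be sketched with plain Node.js serialization. This sketch uses Node's built-in URLSearchParams, which handles the flat values shown here; the full examples below use the qs package instead, which also encodes nested lists and objects (customVocabulary, entities, trackers).

```javascript
// Sketch: the same parameter object, serialized both ways.
const symblaiParams = {
  confidenceThreshold: 0.6,
  languageCode: 'en-US'
};

// Video and audio files: values travel as query parameters on the URL.
const query = new URLSearchParams(
  Object.entries(symblaiParams).map(([key, value]) => [key, String(value)])
).toString();
const videoUrl = `https://api.symbl.ai/v1/process/video?${query}`;

// URLs and text: values travel as a JSON request body.
const body = JSON.stringify(symblaiParams);
```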

The following table describes the parameters you can include in a request.

| Parameter | Type | Description | Example |
| --- | --- | --- | --- |
| webhookUrl | String, optional | The webhook URL for your application. When the status of the processing job is updated, the Async API sends an HTTP request to the URL that you specify. | `'webhookUrl': 'https://webhook.site/12acab7f-2b5d-1490-89b5-bccb9c59a0d3'` |
| confidenceThreshold | Double, optional | The minimum confidence score required for the API to return an insight (action items, follow-ups, topics, and questions). Valid values range from 0.5 to 1.0, inclusive. If not included, the default value is 0.5. | `'confidenceThreshold': 0.6` |
| customVocabulary | List, optional | A list of words and phrases that provide hints to the speech recognition task. | `'customVocabulary': ['marketing director', 'meeting', 'customer']` |
| detectPhrases | Boolean, optional | Shows actionable phrases in each sentence of the conversation. These phrases are included when you get a Speech-to-Text transcript. If not included, the default value is false. | `'detectPhrases': true` |
| detectEntities | Boolean, optional | Enables the detection of entities in your conversation. For more information, see Entities. If not included, the default value is false. | `'detectEntities': true` |
| entities | List, optional | Adds custom entities that can be detected in your conversation. Requires that detectEntities is true. | `'entities': [{'customType': 'executives', 'text': 'marketing director'}]` |
| enableAllTrackers (Beta) | Boolean, optional | Enables detection of all trackers that you have created. For more information, see Trackers (Beta). If not included, the default value is false. | `'enableAllTrackers': true` |
| trackers (Beta) | List, optional | Adds custom trackers that can be detected only in that conversation. Requires that enableAllTrackers is true. | `'trackers': [{'name': 'gratitude', 'vocabulary': ['thanks', 'thank you', 'thank you for your time', 'i appreciate it', 'we appreciate it']}]` |
| enableSpeakerDiarization | Boolean, optional | Enables intelligent speaker separation. If enabled, you must also include diarizationSpeakerCount. If not included, the default value is false. | `'enableSpeakerDiarization': true` |
| diarizationSpeakerCount | Integer, required if enableSpeakerDiarization is true | Sets the expected number of speakers for speaker separation. | `'diarizationSpeakerCount': 2` |
| enableSummary (Beta) | Boolean, optional | Generates a summary of the conversation as it is processed. If not included, the default value is false. | `'enableSummary': true` |
| languageCode | String, optional | Specifies the language to use for Speech-to-Text transcripts. For more information, see Supported Languages. If not included, the default value is en-US. | `'languageCode': 'es-ES'` |
| mode | String, optional | Valid values are phone and default. The phone mode works best for audio generated from a phone call (typically recorded at an 8 kHz sampling rate). The default mode works best for audio generated from video or online meetings (typically recorded at a 16 kHz or higher sampling rate). If not included, the default value is default. | `'mode': 'phone'` |
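As a sketch of the webhookUrl side of this flow: when the job status changes, the Async API sends an HTTP request to the URL you registered. A minimal receiver using Node's built-in http module might look like the following; the exact payload fields are not documented here, so the handler simply parses and logs whatever JSON arrives.

```javascript
import http from 'node:http';

// Minimal sketch of a webhook receiver for Async job status updates.
// The payload shape is an assumption for illustration; the handler
// echoes whatever JSON the Async API delivers.
const server = http.createServer((req, res) => {
  let raw = '';
  req.on('data', (chunk) => { raw += chunk; });
  req.on('end', () => {
    const update = JSON.parse(raw || '{}');
    console.log('Job status update:', update);
    res.writeHead(200);
    res.end('ok');
  });
});

server.listen(3000);
```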

Example requests

This section provides three Node.js example requests:

Submit video file with parameters

The following Node.js code sample enables the Async conversation intelligence features when submitting a video file. In this example, because the request submits a video file, the request parameters are delivered as query parameters.

👍

Try our interactive examples!

We provide interactive versions of these code samples: Node.js

To get started with our code samples, see Set Up Your Test Environment.

import fetch from 'node-fetch';
import fs from 'fs';
import qs from 'qs';

const accessToken = '<ACCESS_TOKEN>';
const filePath = 'BusinessMeeting.mp4';
const symblaiParams = {
  'name': 'Business Meeting',
  'webhookUrl': '<WEBHOOK>',
  'confidenceThreshold': 0.7,
  'customVocabulary': [
    'marketing director',
    'meeting',
    'customer'
  ],
  'detectEntities': true,
  'entities': [
    {
      'customType': 'executives',
      'text': 'marketing director'
    }
  ],
  'detectPhrases': true,
  'enableAllTrackers': true,
  'trackers': [
    {
      'name': 'gratitude',
      'vocabulary': [
        'thanks',
        'thank you',
        'thank you for your time',
        'i appreciate it',
        'we appreciate it'
      ]
    }
  ],
  'enableSpeakerDiarization': true,
  'diarizationSpeakerCount': 3,
  'enableSummary': true,
  'languageCode': 'en-US',
  'mode': 'default'
}

const fetchResponse = await fetch(`https://api.symbl.ai/v1/process/video?${qs.stringify(symblaiParams)}`, {
  method: 'post',
  body: fs.createReadStream(filePath),
  headers: {
    'Authorization': `Bearer ${accessToken}`,
    'Content-Type': 'video/mp4',
  }
});

const responseBody = await fetchResponse.json();

console.log(responseBody);

Submit audio URL with parameters

The following Node.js code sample enables the Async conversation intelligence features when submitting an audio URL. In this example, because the request submits an audio URL, the request parameters are delivered as body parameters.


import fetch from 'node-fetch';

const accessToken = '<ACCESS_TOKEN>';
const symblaiParams = {
  'name': 'Business Meeting',
  'url': 'https://symbltestdata.s3.us-east-2.amazonaws.com/newPhonecall.mp3',
  'webhookUrl': '<WEBHOOK>',
  'confidenceThreshold': 0.7,
  'customVocabulary': [
    'marketing director',
    'meeting',
    'customer'
  ],
  'detectEntities': true,
  'entities': [
    {
      'customType': 'contacts',
      'text': 'technician'
    }
  ],
  'detectPhrases': true,
  'enableAllTrackers': true,
  'trackers': [
    {
      'name': 'gratitude',
      'vocabulary': [
        'thanks',
        'thank you',
        'thank you for your time',
        'i appreciate it',
        'we appreciate it'
      ]
    }
  ],
  'enableSpeakerDiarization': true,
  'diarizationSpeakerCount': 2,
  'enableSummary': true,
  'languageCode': 'en-US',
  'mode': 'phone'
}

const fetchResponse = await fetch('https://api.symbl.ai/v1/process/audio/url', {
  method: 'post',
  body: JSON.stringify(symblaiParams),
  headers: {
    'Authorization': `Bearer ${accessToken}`,
    'Content-Type': 'application/json'
  }
});

const responseBody = await fetchResponse.json();

console.log(responseBody);

Submit text with parameters

The following Node.js code sample enables the Async conversation intelligence features when submitting text. In this example, because the request submits text, the request parameters are delivered as body parameters.


import fetch from 'node-fetch';

const accessToken = '<ACCESS_TOKEN>';
const symblaiParams = {
  'name': 'Business Meeting',
  'webhookUrl': '<WEBHOOK>',
  'confidenceThreshold': 0.7,
  'detectEntities': true,
  'entities': [
    {
      'customType': 'contacts',
      'text': 'technician'
    }
  ],
  'detectPhrases': true,
  'enableAllTrackers': true,
  'trackers': [
    {
      'name': 'gratitude',
      'vocabulary': [
        'thanks',
        'thank you',
        'thank you for your time',
        'i appreciate it',
        'we appreciate it'
      ]
    }
  ],
  'enableSummary': true,
  'messages': [
    {
      "duration": {
        "startTime": "2020-07-21T16:04:19.99Z",
        "endTime": "2020-07-21T16:04:20.99Z"
      },
      "payload": {
        "content": "Hello. I installed your internet service for my new home on July 1 and it's really slow. Is there anything I can do to improve the performance? Thanks for your help."
      },
      "from": {
        "name": "Customer",
        "userId": "[email protected]"
      }
    },
    {
      "duration": {
        "startTime": "2020-07-21T16:04:21.99Z",
        "endTime": "2020-07-21T16:04:23.99Z"
      },
      "payload": {
        "content": "I'm sorry to hear about the inconvenience. I can send an internet service technician to your home whenever you're free. Please respond to this message with your availability."
      },
      "from": {
        "name": "Agent",
        "userId": "[email protected]"
      }
    },
    {
      "duration": {
        "startTime": "2020-07-21T16:04:24.99Z",
        "endTime": "2020-07-21T16:04:26.99Z"
      },
      "payload": {
        "content": "I will finish my work by 5 PM so you could send a technician after that. Does that time work okay?"
      },
      "from": {
        "name": "Customer",
        "userId": "[email protected]"
      }
    },
    {
      "duration": {
        "startTime": "2020-07-21T16:04:27.99Z",
        "endTime": "2020-07-21T16:04:29.99Z"
      },
      "payload": {
        "content": "I'll follow up with the technician. I will call you in an hour to confirm your appointment. Thank you."
      },
      "from": {
        "name": "Agent",
        "userId": "[email protected]"
      }
    }
  ]
}

const fetchResponse = await fetch('https://api.symbl.ai/v1/process/text?enableSummary=true', {
  method: 'post',
  body: JSON.stringify(symblaiParams),
  headers: {
    'Authorization': `Bearer ${accessToken}`,
    'Content-Type': 'application/json'
  }
});

const responseBody = await fetchResponse.json();

console.log(responseBody);

Response

The Async API returns a common response for all submit and append requests. The following table describes the fields in the response.

| Field | Description |
| --- | --- |
| conversationId | The unique identifier of a conversation that is submitted to the Async API. The conversation ID is critical for generating Conversation Intelligence. |
| jobId | The unique identifier of the processing job. The job ID can be used to get the status of the job. |

Example response

The following is an example of the common response for submit and append requests to the Async API.

{
  conversationId: '5784375198220288',
  jobId: 'cf4a68fe-225a-4946-9819-d961d7a31058'
}
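To act on this response, the jobId plugs into the Job API, which reports processing status. The following is a hedged polling sketch; the endpoint path and status values (such as 'in_progress' and 'completed') follow the Symbl.ai Job API, so adjust them if your API version differs.

```javascript
// Sketch: poll the Job API with the jobId from the submit response
// until processing finishes. Endpoint path and status values are
// assumptions based on the Symbl.ai Job API.
const jobStatusUrl = (jobId) => `https://api.symbl.ai/v1/job/${jobId}`;

async function waitForJob(jobId, accessToken, intervalMs = 5000) {
  for (;;) {
    const response = await fetch(jobStatusUrl(jobId), {
      headers: { 'Authorization': `Bearer ${accessToken}` }
    });
    const { status } = await response.json(); // e.g. 'in_progress'
    if (status === 'completed' || status === 'failed') {
      return status;
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

Once the job completes, the conversationId from the same response can be used with the Conversation API to fetch transcripts, entities, trackers, and the summary.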
