Async feature reference
When you process a conversation with the Async API, you can also apply a subset of Symbl.ai's conversation intelligence features. This guide briefly describes the available features and provides basic examples of invoking them.
The following table briefly describes the conversation intelligence features that you can apply when you process a conversation.
Feature | Description |
---|---|
Actionable phrases | Identify phrases in conversations that may contain actionable information. If any are found, these phrases are included when you get a Speech-to-text transcript. |
Custom vocabulary | For audio and video sources, custom vocabulary gives hints about known words that occur in the conversation to improve speech recognition. |
Entity detection | Identify entities such as PII, PCI, and PHI data, dates, times, numbers, and more in your conversations. For more information about entities, see Entity detection. |
Speaker separation | Intelligently identify different speakers in a conversation. For more information, see Apply speaker separation to audio and video recordings. |
Transcription for different languages | Enable speech-to-text transcription for any of the supported languages. For more information, see Supported languages. |
Track custom phrases | Implement trackers that intelligently identify phrases based on vocabulary that you provide. For more information about trackers, see Trackers. |
Conversation summary | Create an AI-generated summary of the conversation. For more information about summarization, see Summary. |
Call Score | Generate call scores with qualitative feedback for a conversation to evaluate agent performance and call quality at scale. For more information about call score, see Call Score. |
Request parameters
For the Async API, you enable conversation intelligence features by including parameters in your request. The parameters are the same for every request, but how you deliver them depends on what you submit:
- When you submit video and audio files, you specify the values as query parameters.
- When you submit URLs and text, you specify the values as body parameters.
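The two delivery modes can be sketched as follows. This is a minimal illustration using a few scalar parameters from the table below; nested parameters such as trackers require a serializer like qs when sent as query parameters.

```javascript
// Sketch: delivering the same parameters as query parameters (file
// submissions) or as a JSON body (URL and text submissions). Only
// scalar parameters are shown; nested values such as 'trackers' need
// a serializer like qs when sent as query parameters.
const symblaiParams = {
  confidenceThreshold: 0.6,
  detectPhrases: true,
  languageCode: 'es-ES'
};

// Query-parameter form, for video and audio file submissions.
const query = new URLSearchParams(
  Object.entries(symblaiParams).map(([key, value]) => [key, String(value)])
).toString();

// Body form, for URL and text submissions.
const body = JSON.stringify(symblaiParams);

console.log(query); // confidenceThreshold=0.6&detectPhrases=true&languageCode=es-ES
console.log(body);  // {"confidenceThreshold":0.6,"detectPhrases":true,"languageCode":"es-ES"}
```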
The following table describes the parameters you can include in a request.
Parameter | Type | Description | Example |
---|---|---|---|
webhookUrl | String, optional | The webhook URL for your application. When the status of the processing job is updated, the Async API sends an HTTP request to the URL that you specify. | 'webhookUrl': 'https://webhook.site/12acab7f-2b5d-1490-89b5-bccb9c59a0d3' |
confidenceThreshold | Double, optional | Minimum confidence score required for the API to consider an insight (action items, follow-ups, topics, and questions) valid. Must be in the range 0.5 to 1.0, inclusive (greater than or equal to 0.5 and less than or equal to 1.0). If not included, the default value is 0.5. | 'confidenceThreshold': 0.6 |
customVocabulary | List, optional | Contains a list of words and phrases that provide hints to the speech recognition task. | 'customVocabulary': [ 'marketing director', 'meeting', 'customer' ] |
detectPhrases | Boolean, optional | Identifies actionable phrases in each sentence of the conversation. These phrases are included when you get a Speech-to-text transcript. If not included, the default value is false. | 'detectPhrases': true |
detectEntities | Boolean, optional | Enables the detection of entities in your conversation. For more information, see Entity detection. If not included, the default value is true . | 'detectEntities': true |
enableAllTrackers | Boolean, optional | Enables detection of all active trackers in your Managed Trackers Library. For more information, see Trackers. Enabled by default. To disable, change the value to false . | 'enableAllTrackers': true |
trackers | List, optional | Adds custom trackers that are detected only in that conversation. Requires that enableAllTrackers is true. | 'trackers': [ { 'name': 'gratitude', 'vocabulary': [ 'thanks', 'thank you', 'thank you for your time', 'i appreciate it', 'we appreciate it' ] } ] |
enableSpeakerDiarization | Boolean, optional | Enables intelligent speaker separation. If enabled, you must also include diarizationSpeakerCount. If not included, the default value is false. | 'enableSpeakerDiarization': true |
diarizationSpeakerCount | Integer, required if enableSpeakerDiarization is true | Sets the expected number of speakers for speaker separation. | 'diarizationSpeakerCount': 2 |
enableSummary | Boolean, optional | Generates a summary for the conversation as it is processed. If not included, the default value is false . | 'enableSummary': true |
languageCode | String, optional | Specifies the language to use for Speech-to-text transcripts. For more information, see Supported languages. If not included, the default value is en-US . | 'languageCode': 'es-ES' |
mode | String, optional | Valid values are phone and default. phone mode works best for audio generated from a phone call (typically recorded at an 8 kHz sampling rate). default mode works best for audio generated from video or online meetings (typically recorded at a 16 kHz or higher sampling rate). If not included, the default value is default. | 'mode': 'phone' |
metadata | Object, optional | Adds salesStage and prospectName metadata to describe the conversation that you are processing, used for call score and the insights UI. | 'metadata': { 'salesStage': 'general', 'prospectName': 'ABC Inc.' } |
conversationType | String, optional | Add a conversationType to specify which type of conversation you are processing for call score. | 'conversationType': 'general' |
features | Object, optional | Enables the callScore and insights features to generate the call score. | 'features': { 'featureList': [ 'callScore', 'insights' ] } |
callScore | Object, optional | Includes scorecardId to indicate which scorecard is used to evaluate the call. | 'callScore': { 'scorecardId': '5733078868164608' } |
callScoreWebhookUrl | String, optional | When the status of the call score processing job is updated, the call score API sends an HTTP request with the status to the URL that you specify. | 'callScoreWebhookUrl': 'https://webhook.site/12acab7f-2b5d-1490-89b5-bccb9c59a0d3' |
Example requests
This section provides three Node.js example requests:
Submit video file with parameters
The following Node.js code sample enables the Async conversation intelligence features when submitting a video file. In this example, because the request submits a video file, the request parameters are delivered as query parameters.
Try our interactive examples!
We provide interactive versions of these code samples: Node.js
To get started with our code samples, see Set up your test environment.
import fetch from 'node-fetch';
import fs from 'fs';
import qs from 'qs';
const accessToken = '<ACCESS_TOKEN>';
const filePath = 'BusinessMeeting.mp4';
const symblaiParams = {
'name': 'Business Meeting',
'webhookUrl': '<WEBHOOK>',
'confidenceThreshold': 0.7,
'customVocabulary': [
'marketing director',
'meeting',
'customer'
],
'detectEntities': true,
'entities': [
{
'customType': 'executives',
'text': 'marketing director'
}
],
'detectPhrases': true,
'enableAllTrackers': true,
'trackers': [
{
'name': 'gratitude',
'vocabulary': [
'thanks',
'thank you',
'thank you for your time',
'i appreciate it',
'we appreciate it'
]
}
],
'enableSpeakerDiarization': true,
'diarizationSpeakerCount': 3,
'enableSummary': true,
'languageCode': 'en-US',
'mode': 'default',
'callScoreWebhookUrl': 'https://webhook.site/dc3907b1-429a-466a-97e2-2eae00b752da',
'features': {
'featureList': [
'insights',
'callScore'
]
},
'conversationType': 'general',
'metadata': {
'salesStage': 'general',
'prospectName': 'ABC Inc'
}
}
const fetchResponse = await fetch(`https://api.symbl.ai/v1/process/video?${qs.stringify(symblaiParams)}`, {
method: 'post',
body: fs.createReadStream(filePath),
headers: {
'Authorization': `Bearer ${accessToken}`,
'Content-Type': 'video/mp4',
}
});
const responseBody = await fetchResponse.json();
console.log(responseBody);
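If you set webhookUrl, your application needs an endpoint that can receive the status update. The following is a minimal sketch of such a receiver using Node's built-in http module; the payload fields used here (jobId, status, conversationId) are assumptions for illustration, not a documented schema.

```javascript
import http from 'http';

// Hypothetical status-update handler. The field names (jobId, status,
// conversationId) are assumptions for illustration, not a documented schema.
function describeStatusUpdate(payload) {
  return payload.status === 'completed'
    ? `conversation ready: ${payload.conversationId}`
    : `job ${payload.jobId} is ${payload.status}`;
}

// Minimal receiver for the webhookUrl you registered with the request.
const server = http.createServer((req, res) => {
  let raw = '';
  req.on('data', (chunk) => (raw += chunk));
  req.on('end', () => {
    console.log(describeStatusUpdate(JSON.parse(raw)));
    res.writeHead(200);
    res.end();
  });
});

// server.listen(3000); // uncomment to accept webhook requests locally
```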
Submit audio URL with parameters
The following Node.js code sample enables the Async conversation intelligence features when submitting an audio URL. In this example, because the request submits an audio URL, the request parameters are delivered as body parameters.
import fetch from 'node-fetch';
const accessToken = '<ACCESS_TOKEN>';
const symblaiParams = {
'name': 'Business Meeting',
'url': 'https://symbltestdata.s3.us-east-2.amazonaws.com/newPhonecall.mp3',
'webhookUrl': '<WEBHOOK>',
'confidenceThreshold': 0.7,
'customVocabulary': [
'marketing director',
'meeting',
'customer'
],
'detectEntities': true,
'entities': [
{
'customType': 'contacts',
'text': 'technician'
}
],
'detectPhrases': true,
'enableAllTrackers': true,
'trackers': [
{
'name': 'gratitude',
'vocabulary': [
'thanks',
'thank you',
'thank you for your time',
'i appreciate it',
'we appreciate it'
]
}
],
'enableSpeakerDiarization': true,
'diarizationSpeakerCount': 2,
'enableSummary': true,
'languageCode': 'en-US',
'mode': 'phone',
'callScoreWebhookUrl': 'https://webhook.site/dc3907b1-429a-466a-97e2-2eae00b752da',
'features': {
'featureList': [
'insights',
'callScore'
]
},
'conversationType': 'general',
'metadata': {
'salesStage': 'general',
'prospectName': 'ABC Inc'
}
}
const fetchResponse = await fetch('https://api.symbl.ai/v1/process/audio/url', {
method: 'post',
body: JSON.stringify(symblaiParams),
headers: {
'Authorization': `Bearer ${accessToken}`,
'Content-Type': 'application/json'
}
});
const responseBody = await fetchResponse.json();
console.log(responseBody);
Submit text with parameters
The following Node.js code sample enables the Async conversation intelligence features when submitting text. In this example, because the request submits text, the request parameters are delivered as body parameters.
import fetch from 'node-fetch';
const accessToken = '<ACCESS_TOKEN>';
const symblaiParams = {
'name': 'Business Meeting',
'webhookUrl': '<WEBHOOK>',
'confidenceThreshold': 0.7,
'detectEntities': true,
'entities': [
{
'customType': 'contacts',
'text': 'technician'
}
],
'detectPhrases': true,
'enableAllTrackers': true,
'trackers': [
{
'name': 'gratitude',
'vocabulary': [
'thanks',
'thank you',
'thank you for your time',
'i appreciate it',
'we appreciate it'
]
}
],
'callScoreWebhookUrl': 'https://webhook.site/dc3907b1-429a-466a-97e2-2eae00b752da',
'features': {
'featureList': [
'insights',
'callScore'
]
},
'conversationType': 'general',
'metadata': {
'salesStage': 'general',
'prospectName': 'ABC Inc'
},
'enableSummary': true,
'messages': [
{
"duration": {
"startTime": "2020-07-21T16:04:19.99Z",
"endTime": "2020-07-21T16:04:20.99Z"
},
"payload": {
"content": "Hello. I installed your internet service for my new home on July 1 and it's really slow. Is there anything I can do to improve the performance? Thanks for your help."
},
"from": {
"name": "Customer",
"userId": "[email protected]"
}
},
{
"duration": {
"startTime": "2020-07-21T16:04:21.99Z",
"endTime": "2020-07-21T16:04:23.99Z"
},
"payload": {
"content": "I'm sorry to hear about the inconvenience. I can send an internet service technician to your home whenever you're free. Please respond to this message with your availability."
},
"from": {
"name": "Agent",
"userId": "[email protected]"
}
},
{
"duration": {
"startTime": "2020-07-21T16:04:24.99Z",
"endTime": "2020-07-21T16:04:26.99Z"
},
"payload": {
"content": "I will finish my work by 5 PM so you could send a technician after that. Does that time work okay?"
},
"from": {
"name": "Customer",
"userId": "[email protected]"
}
},
{
"duration": {
"startTime": "2020-07-21T16:04:27.99Z",
"endTime": "2020-07-21T16:04:29.99Z"
},
"payload": {
"content": "I'll follow up with the technician. I will call you in an hour to confirm your appointment. Thank you."
},
"from": {
"name": "Agent",
"userId": "[email protected]"
}
}
]
}
const fetchResponse = await fetch('https://api.symbl.ai/v1/process/text?enableSummary=true', {
method: 'post',
body: JSON.stringify(symblaiParams),
headers: {
'Authorization': `Bearer ${accessToken}`,
'Content-Type': 'application/json'
}
});
const responseBody = await fetchResponse.json();
console.log(responseBody);
Response
The Async API returns a common response for all submit and append requests. The following table describes the fields in the response.
Field | Description |
---|---|
conversationId | The unique identifier of a conversation that is submitted to the Async API. The conversation ID is critical for generating Conversation Intelligence. |
jobId | The unique identifier of the processing job. The job ID can be used to get the Job status. |
Example response
The following is an example of the common response for submit and append requests to the Async API.
{
conversationId: '5784375198220288',
jobId: 'cf4a68fe-225a-4946-9819-d961d7a31058'
}
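You can poll with the jobId until processing finishes. The following is a minimal sketch, assuming the Job API endpoint GET https://api.symbl.ai/v1/job/{jobId} returns an object with a status of in_progress, completed, or failed; it uses the global fetch available in Node 18 and later (the samples above use node-fetch, and either works).

```javascript
// Sketch: poll the Job API until the processing job reaches a terminal
// state. Assumes GET /v1/job/{jobId} returns { id, status } and that
// status is one of 'in_progress', 'completed', or 'failed'.
const accessToken = '<ACCESS_TOKEN>';

function isFinished(job) {
  // Terminal states; keep polling while the job is in_progress.
  return job.status === 'completed' || job.status === 'failed';
}

async function waitForJob(jobId, intervalMs = 5000) {
  for (;;) {
    const response = await fetch(`https://api.symbl.ai/v1/job/${jobId}`, {
      headers: { 'Authorization': `Bearer ${accessToken}` }
    });
    const job = await response.json();
    if (isFinished(job)) return job;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}

// const job = await waitForJob('cf4a68fe-225a-4946-9819-d961d7a31058');
```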