

The Async Audio API allows you to process an audio file.

It can be utilized for any use case where you have access to recorded audio and want to extract insights and other conversational attributes supported by Symbl's Conversation API.

Use this API to upload your file and generate a Conversation ID. If you want to append additional audio information to the same Conversation ID, use the Async Audio Append API.

API Endpoint


Example API Call

The sample request sends the raw audio file as the request body, with the MIME type set in the Content-Type header. The audio file should have a mono channel only.

Before using the Async Audio API, you must obtain an authentication token (AUTH_TOKEN) from our authentication process.

# WAV File
curl --location --request POST "" \
--header 'Content-Type: audio/wav' \
--header "Authorization: Bearer $AUTH_TOKEN" \
--data-binary '@/file/location/audio.wav'

# MP3 File
curl --location --request POST "" \
--header 'Content-Type: audio/mpeg' \
--header "Authorization: Bearer $AUTH_TOKEN" \
--data-binary '@/file/location/audio.mp3'

Request Headers

| Header Name | Required | Value |
|---|---|---|
| Authorization | Mandatory | `Bearer <token>`. The token you get from our authentication process. |
| Content-Length | Mandatory | This should correctly indicate the length of the request body in bytes. |
| Content-Type | Optional | Describes the format and codec of the provided audio data. Accepted values are audio/wav, audio/mpeg, audio/mp3, and audio/wave only. If your audio is in a format other than these, do not use this field. |
| x-api-key | Optional | DEPRECATED. The JWT token you get from our authentication process. |

Query Parameters

| Parameter | Required | Type | Description |
|---|---|---|---|
| name | Optional | String | Your meeting name. Defaults to the conversationId. |
| webhookUrl | Optional | String | Webhook URL to which job updates are sent after the API call is made. For the webhook payload, refer to the Using Webhook section below. |
| customVocabulary | Optional | String[] | A list of words and phrases that provide hints to the speech recognition task. |
| confidenceThreshold | Optional | Double | Minimum confidence score for the API to consider an insight (action items, follow-ups, topics, and questions) valid. It should be in the range >=0.5 and <=1.0 (i.e., greater than or equal to 0.5 and less than or equal to 1.0). The default value is 0.5. |
| entities | Optional | Object[] | Input custom entities which can be detected in your conversation using the Entities API. See the sample request under the Custom Entity section. |
| detectEntities | Optional | Boolean | Default value is false. If not set, the Entities API will not return any entities from the conversation. |
| detectPhrases | Optional | Boolean | Accepted values are true and false. Shows Actionable Phrases in each sentence of the conversation. These sentences can be found in the Conversation's Messages API. |
| enableSeparateRecognitionPerChannel | Optional | Boolean | Enables speaker-separated channel audio processing. Accepts true or false. |
| channelMetadata | Optional | Object[] | Contains two fields, speaker and channel, which specify which speaker corresponds to which channel. This object only works when enableSeparateRecognitionPerChannel is set to true. Learn more in the Channel Metadata section below. |
| languageCode | Optional | String | We accept different languages. Please check the language code as per your requirement. |
| mode | Optional | String | Accepts phone or default. phone mode is best for audio generated from a phone call (typically recorded at an 8 kHz sampling rate). default mode works best for audio generated from video or online meetings (typically recorded at a 16 kHz or higher sampling rate). If you don't pass this parameter, default is selected automatically. |
| trackers (BETA) | Optional | List | A tracker entity containing name and vocabulary (a list of key words and/or phrases to be tracked). Read more in the Tracker API section. |
| enableAllTrackers (BETA) | Optional | Boolean | Default value is false. Setting this parameter to true enables detection of all the Trackers maintained for your account by the Management API, allowing Symbl to detect all available Trackers in a specific conversation. |
| enableSummary (LABS) | Optional | Boolean | Setting this parameter to true allows you to generate Summaries using the Summary API (Labs). Ensure that you use the Labs base URL. |
| enableSpeakerDiarization | Optional | Boolean | Whether diarization should be enabled for this conversation. Pass true to enable Speaker Separation. To learn more, refer to the Speaker Separation section below. |
| diarizationSpeakerCount | Optional | String | The number of unique speakers in this conversation. To learn more, refer to the Speaker Separation section below. |
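As a sketch of how these query parameters combine into a request URL, the snippet below serializes a few of them with Python's standard library. The base endpoint here is a placeholder, and the parameter values are illustrative.

```python
from urllib.parse import urlencode

# Placeholder endpoint -- substitute the actual Async Audio API URL.
BASE_URL = "https://example-api-host/v1/process/audio"

# Illustrative query parameters; boolean values are serialized as strings.
params = {
    "name": "Weekly sync",
    "confidenceThreshold": 0.6,
    "detectPhrases": "true",
    "enableSpeakerDiarization": "true",
    "diarizationSpeakerCount": 2,
}

# Build the final request URL with URL-encoded query parameters.
request_url = f"{BASE_URL}?{urlencode(params)}"
```

The resulting URL is what the curl examples above would POST the audio file to, once the real endpoint is substituted.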

Custom Entity

"detectEntities": true,
"entities": [
"customType": "identify_people",
"text": "executives"
"customType": "identify_colour",
"text": "blue"

Channel Metadata

The channelMetadata object has the members channel and speaker as shown below:

Given below is an example of a channelMetadata object:

"channelMetadata": [
"channel": 1,
"speaker": {
"name": "Robert Bartheon",
"email": ""
"channel": 2,
"speaker": {
"name": "Arya Stark",
"email": ""

The channelMetadata object has the following members:

| Field | Required | Type | Description |
|---|---|---|---|
| channel | Yes | Integer | The channel number in the audio file. Each channel contains an independent speaker's voice data. |
| speaker | Yes | Object | The wrapper object which defines the speaker for this channel. |

The speaker object has the following members:

| Field | Required | Type | Description |
|---|---|---|---|
| name | No | String | Name of the speaker. |
| email | No | String | Email address of the speaker. |
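As a sketch, the channelMetadata array can be built programmatically; the helper below is hypothetical, and the names and email addresses are illustrative placeholders:

```python
def make_channel_metadata(speakers):
    """Build a channelMetadata array from (channel, name, email) tuples.

    Raises if two entries claim the same channel, since each channel
    must carry an independent speaker's voice data.
    """
    channels = [c for c, _, _ in speakers]
    if len(channels) != len(set(channels)):
        raise ValueError("each channel number must be unique")
    return [
        {"channel": c, "speaker": {"name": n, "email": e}}
        for c, n, e in speakers
    ]

# Illustrative two-channel setup matching the example above.
channel_metadata = make_channel_metadata([
    (1, "Robert Bartheon", "robert@example.com"),
    (2, "Arya Stark", "arya@example.com"),
])
```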


"conversationId": "5815170693595136",
"jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d"
conversationIdID to be used with Conversation API.
jobIdID to be used with Job API.
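As a sketch, the two IDs can be read out of this response with a few lines of Python; the response body is shown as a literal string here, whereas in practice it comes back from the HTTP call:

```python
import json

# Example response body, as returned by the Async Audio API call.
response_body = (
    '{"conversationId": "5815170693595136", '
    '"jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d"}'
)

data = json.loads(response_body)
conversation_id = data["conversationId"]  # use with the Conversation API
job_id = data["jobId"]                    # use with the Job API
```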

Speaker Separation

The Async Audio & Async Video APIs can detect and separate unique speakers in a single stream of audio & video without the need for separate speaker events.

To enable this capability with either API, pass the enableSpeakerDiarization and diarizationSpeakerCount query parameters with the request. diarizationSpeakerCount should equal the number of unique speakers in the conversation; if the number differs, this may introduce false positives in the diarized results.

👉 To learn how to implement Speaker Separation, see the How to implement Speaker Separation page.

If you're looking for similar capability in the Real-Time APIs, please refer to the Active Speaker Events and Speaker Separation in WebSocket API sections.


Speaker Diarization Language Support

Currently, Speaker Diarization is available for English and Spanish languages only.

Billing for Speaker Separated Channels

Billing for a speaker-separated channel audio file is according to the number of channels present in the audio file. The duration for billing is calculated according to the formula below:

totalDuration = duration_of_the_audio_file * total_number_of_channels

So, if you send a 120-second file with 3 speaker separated channels, the total duration for billing would be 360 seconds or 6 minutes.
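The billing formula above can be checked with a quick calculation, using the values from the example:

```python
def billed_duration_seconds(duration_seconds, channel_count):
    # totalDuration = duration_of_the_audio_file * total_number_of_channels
    return duration_seconds * channel_count

# A 120-second file with 3 speaker-separated channels bills as 360 seconds (6 minutes).
total = billed_duration_seconds(120, 3)
```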

Using Webhook

The webhookUrl is used to send the status of the job created for the uploaded audio. Every time the status of the job changes, a notification is sent to the webhook URL.

Code Example

{
  "jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d",
  "status": "in_progress"
}

| Field | Description |
|---|---|
| jobId | ID to be used with the Job API. |
| status | Current status of the job. Valid statuses: scheduled, in_progress, completed, failed. |
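A sketch of how a webhook receiver might handle this payload, assuming the shape shown above; the handler name is hypothetical:

```python
import json

VALID_STATUSES = {"scheduled", "in_progress", "completed", "failed"}

def handle_job_update(raw_payload):
    """Hypothetical handler for the webhook payload shown above."""
    payload = json.loads(raw_payload)
    status = payload["status"]
    if status not in VALID_STATUSES:
        raise ValueError(f"unexpected status: {status}")
    return payload["jobId"], status

# Example invocation with the payload from the code example above.
job_id, status = handle_job_update(
    '{"jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d", "status": "in_progress"}'
)
```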

API Limit Error

"statusCode" : 429,
"message" : "This API has a limit of maximum of `X` number of concurrent jobs per account. If you are looking to scale, and need more concurrent jobs than this limit, please contact us at"

Here value of X can be found in FAQ.


You must wait for the job to complete processing before retrieving the Conversation Intelligence. If you make a GET request to the Conversation API immediately, you may receive incomplete insights, so ensure that the job has completed first.
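The wait-for-completion advice can be sketched as a simple polling loop; `fetch_job_status` is a stand-in for a real call to the Job API:

```python
import time

def wait_for_job(fetch_job_status, job_id, poll_seconds=5, max_polls=120):
    """Poll until the job reaches a terminal status ('completed' or 'failed')."""
    for _ in range(max_polls):
        status = fetch_job_status(job_id)
        if status in ("completed", "failed"):
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("job did not finish within the polling window")
```

Only after this returns "completed" should you call the Conversation API for insights.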