PUT Audio URL API

The Async Audio URL API allows you to append an additional audio URL to an existing conversation, append its transcription, and get conversational insights for the updated conversation.

It is useful whenever you have multiple recorded conversations stored as publicly accessible audio URLs and want to extract the insights supported by the Conversation API.

info

If multiple requests are submitted for the same conversationId, they are processed sequentially to preserve the order of the requests for that conversation.

HTTP REQUEST#

PUT https://api.symbl.ai/v1/process/audio/url/:conversationId

Example API call#

curl --location --request PUT 'https://api.symbl.ai/v1/process/audio/url/:conversationId?webhookUrl=<your_webhook_url>' \
--header 'Content-Type: application/json' \
--header 'x-api-key: <generated_valid_token>' \
--data-raw '{
  "url": "https://symbltestdata.s3.us-east-2.amazonaws.com/sample_audio_file.wav",
  "confidenceThreshold": 0.6,
  "timezoneOffset": 0
}'

Request Headers#

| Header Name | Required | Value |
| ----------- | -------- | ----- |
| x-api-key | Yes | your_auth_token |
| Content-Type | Yes | Accepted value is application/json. |

Path Params#

| Parameter | Value |
| --------- | ----- |
| conversationId | The conversationId returned by the first request submitted through the POST Async Audio API. |

Request Body#

| Parameters | Required | Description |
| ---------- | -------- | ----------- |
| url | Yes | A valid URL string. The URL must be publicly accessible. |
| customVocabulary | No | A list of words and phrases that provide hints to the speech recognition task. |
| confidenceThreshold | No | Minimum confidence required for an insight to be recognized. Values range from 0.0 to 1.0. The default value is 0.5. |
| detectPhrases | No | Shows Actionable Phrases in each sentence of the conversation. These sentences can be retrieved using the Conversation API's Messages API. Boolean value; the default is false. |
| name | No | Your meeting name. Defaults to the conversationId. |
| webhookUrl | No | Webhook URL to which job updates are sent. This should be a POST endpoint. |
| entities | No | Custom entities that can be detected in your conversation using the Entities API. For example, see the sample request body below. |
| languageCode | No | Language of the conversation. We accept different languages; check the supported language codes and pass the one that matches your audio. |
| enableSeparateRecognitionPerChannel | No | Enables speaker-separated channel audio processing. Accepts true or false. |
| channelMetadata | No | A list of objects, each containing a channel and a speaker, that specifies which speaker corresponds to which channel. This parameter only takes effect when enableSeparateRecognitionPerChannel is set to true. See the channelMetadata Object section below. |
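
For reference, the body below sketches how several of these optional parameters fit together in one request. The url points to the sample file used earlier; the name, customVocabulary, and entities values are placeholder assumptions, and the entity shape (customType, value, text) should be verified against the Entities API documentation.

{
  "url": "https://symbltestdata.s3.us-east-2.amazonaws.com/sample_audio_file.wav",
  "name": "Product Planning Call",
  "confidenceThreshold": 0.6,
  "detectPhrases": true,
  "customVocabulary": ["Symbl", "conversation intelligence"],
  "entities": [
    {
      "customType": "Company Executives",
      "value": "marketing director",
      "text": "marketing director"
    }
  ]
}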

channelMetadata Object#

{
  "channelMetadata": [
    {
      "channel": 1,
      "speaker": {
        "name": "Robert Bartheon",
        "email": "robertbartheon@gmail.com"
      }
    },
    {
      "channel": 2,
      "speaker": {
        "name": "Arya Stark",
        "email": "aryastark@gmail.com"
      }
    }
  ]
}

The channelMetadata object has the following members:

| Field | Description |
| ----- | ----------- |
| channel | Denotes the channel number in the audio file. Each channel contains an independent speaker's voice data. |
| speaker | The wrapper object that defines the speaker for this channel. |

The speaker object has the following members:

| Field | Description |
| ----- | ----------- |
| name | Name of the speaker. |
| email | Email address of the speaker. |

caution

Billing for a speaker-separated channel audio file is based on the number of channels present in the audio file. The duration used for billing is calculated with the following formula:

totalDuration = duration_of_the_audio_file * total_number_of_channels

So if you send a 120-second file with 3 speaker separated channels, the total duration for billing would be 360 seconds or 6 minutes.
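
For illustration, the sketch below combines enableSeparateRecognitionPerChannel with the channelMetadata object from above in a single request. The conversationId and audio URL are placeholders, and the audio file is assumed to contain two separate speaker channels.

curl --location --request PUT 'https://api.symbl.ai/v1/process/audio/url/:conversationId' \
--header 'Content-Type: application/json' \
--header 'x-api-key: <generated_valid_token>' \
--data-raw '{
  "url": "<publicly_accessible_url_of_a_two_channel_audio_file>",
  "enableSeparateRecognitionPerChannel": true,
  "channelMetadata": [
    { "channel": 1, "speaker": { "name": "Robert Bartheon", "email": "robertbartheon@gmail.com" } },
    { "channel": 2, "speaker": { "name": "Arya Stark", "email": "aryastark@gmail.com" } }
  ]
}'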

Webhook Payload#

The webhookUrl is used to send the status of the job created for the submitted audio URL. Every time the status of the job changes, a notification is sent to the webhookUrl.

| Field | Description |
| ----- | ----------- |
| jobId | ID to be used with the Job API. |
| status | Current status of the job. (Valid statuses: scheduled, in_progress, completed, failed) |
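
As a rough example, a single notification delivered to your webhookUrl would carry the fields listed above; the values below are placeholders.

{
  "jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d",
  "status": "in_progress"
}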

Response#

{
  "conversationId": "5815170693595136",
  "jobId": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d"
}

Response Object#

| Field | Description |
| ----- | ----------- |
| conversationId | ID to be used with the Conversation API. |
| jobId | ID to be used with the Jobs API. |
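
If you prefer polling over a webhook, you can check the job status with the returned jobId. The sketch below assumes the standard Job API endpoint (GET https://api.symbl.ai/v1/job/{jobId}); confirm the exact path against the Job API reference.

curl --location --request GET 'https://api.symbl.ai/v1/job/9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d' \
--header 'x-api-key: <generated_valid_token>'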