# PUT Audio URL API
The Async Audio URL API allows you to append an additional audio URL to a previous conversation, append its transcription, and get conversational insights for the updated conversation.
It is useful whenever you have multiple recordings of a conversation stored at publicly accessible URLs and want to extract the insights supported by the Conversation API.
info
If multiple requests are submitted for the same conversationId, they are processed sequentially to maintain the order of the requests for the conversation.
## HTTP Request

`PUT https://api.symbl.ai/v1/process/audio/url/:conversationId`
## Example API Call
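The original tabbed cURL and JavaScript samples did not survive extraction. Below is a minimal JavaScript sketch of the request, assuming Node 18+ (built-in `fetch`); the conversation ID, token, and audio URL are placeholders, not real values.

```javascript
// Sketch of the PUT Audio URL request (Node 18+, built-in fetch).
// conversationId, the x-api-key value, and the audio URL are placeholders.
const conversationId = "5678123456789012"; // returned by the initial POST Async Audio request

const payload = {
  url: "https://example.com/recordings/meeting-part-2.wav", // must be publicly accessible
  name: "Weekly sync, part 2",
  confidenceThreshold: 0.6,
  detectPhrases: true,
};

const options = {
  method: "PUT",
  headers: {
    "x-api-key": "your_auth_token",
    "Content-Type": "application/json",
  },
  body: JSON.stringify(payload),
};

// fetch(`https://api.symbl.ai/v1/process/audio/url/${conversationId}`, options)
//   .then((res) => res.json())
//   .then(({ conversationId, jobId }) => console.log(conversationId, jobId));
```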
## Request Headers

| Header Name | Required | Value |
|---|---|---|
| `x-api-key` | Yes | `your_auth_token` |
| `Content-Type` | Yes | Accepted value is `application/json`. |
## Path Params

| Parameter | Value |
|---|---|
| `conversationId` | The `conversationId` returned by the initial request submitted through the POST Async Audio API. |
## Request Body

| Parameter | Required | Description |
|---|---|---|
| `url` | Yes | A valid URL string. The URL must be publicly accessible. |
| `customVocabulary` | No | A list of words and phrases that provide hints to the speech recognition task. |
| `confidenceThreshold` | No | Minimum confidence required for an insight to be recognized. Value ranges from 0.0 to 1.0. Default is 0.5. |
| `detectPhrases` | No | Detects actionable phrases in each sentence of the conversation. These sentences can be retrieved using the Conversation API's Messages API. Boolean; default is `false`. |
| `name` | No | Your meeting name. Defaults to the `conversationId`. |
| `webhookUrl` | No | Webhook URL to which job status updates are sent. It should accept POST requests. |
| `entities` | No | Custom entities that can be detected in your conversation using the Entities API. |
| `languageCode` | No | The language of the audio. Choose the language code that matches your requirement. |
| `enableSeparateRecognitionPerChannel` | No | Enables speaker-separated channel audio processing. Accepts `true` or `false`. |
| `channelMetadata` | No | Contains `channel` and `speaker` fields that specify which speaker corresponds to which channel. Only applies when `enableSeparateRecognitionPerChannel` is set to `true`. |
## channelMetadata Object

The `channelMetadata` object has the following members:

| Field | Description |
|---|---|
| `channel` | The channel number in the audio file. Each channel contains an independent speaker's voice data. |
| `speaker` | The wrapper object that defines the speaker for this channel. |

`speaker` has the following members:

| Field | Description |
|---|---|
| `name` | Name of the speaker. |
| `email` | Email address of the speaker. |
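As a concrete illustration, a request body enabling speaker-separated channels might look like the sketch below. The URL, names, and emails are placeholders, and `channelMetadata` is shown as a list with one entry per channel, pairing each `channel` number with its `speaker`.

```javascript
// Illustrative request body for speaker-separated channel processing.
// All values are placeholders; each channelMetadata entry pairs a channel
// number with the speaker recorded on that channel.
const body = {
  url: "https://example.com/recordings/two-channel-call.wav",
  enableSeparateRecognitionPerChannel: true,
  channelMetadata: [
    { channel: 1, speaker: { name: "Alex", email: "alex@example.com" } },
    { channel: 2, speaker: { name: "Blair", email: "blair@example.com" } },
  ],
};
```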
caution
Billing for a speaker-separated channel audio file is based on the number of channels present in the file. The billed duration is calculated with the formula below:
`totalDuration = duration_of_the_audio_file * total_number_of_channels`
So if you send a 120-second file with 3 speaker-separated channels, the total billed duration is 360 seconds, or 6 minutes.
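The formula above can be expressed as a one-line helper:

```javascript
// Billed duration = file duration multiplied by the number of channels.
function billedDurationSeconds(audioDurationSeconds, channelCount) {
  return audioDurationSeconds * channelCount;
}

billedDurationSeconds(120, 3); // 360 seconds, i.e. 6 minutes
```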
## Webhook Payload

The `webhookUrl` is used to send the status of the job created for the submitted audio. Every time the job status changes, a notification is sent to the `webhookUrl`.
| Field | Description |
|---|---|
| `jobId` | ID to be used with the Job API. |
| `status` | Current status of the job. Valid statuses: `scheduled`, `in_progress`, `completed`, `failed`. |
## Response

### Response Object

| Field | Description |
|---|---|
| `conversationId` | ID to be used with the Conversation API. |
| `jobId` | ID to be used with the Job API. |
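With the returned `jobId`, the processing status can be checked via the Job API. The helper below only builds the request URL; the endpoint path is an assumption based on the fields described above, so verify it against the Job API reference.

```javascript
// Builds the Job API status URL for a given jobId (path is an assumption).
function jobStatusUrl(jobId) {
  return `https://api.symbl.ai/v1/job/${jobId}`;
}

// Example usage (not executed here):
// const res = await fetch(jobStatusUrl(jobId), { headers: { "x-api-key": token } });
// const { status } = await res.json();
```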