# PUT Audio API
The Async Audio API allows you to append an additional audio file to a previously processed conversation, update the transcription, and get conversational insights for the updated conversation.
It is useful whenever you have multiple audio files from any type of conversation and want to extract the insights supported by the Conversation API.
## HTTP Request

`PUT https://api.symbl.ai/v1/process/audio/:conversationId`
## Example API Call

Examples are available in cURL and JavaScript.
## Request Headers

info

`Content-Type`: This field is optional. If you are unsure of the audio format, you can omit it and the API will detect the content type automatically. When it is provided, the audio format is validated against it.
Header Name | Required | Value |
---|---|---|
`x-api-key` | Yes | `your_auth_token` |
`Content-Type` | No | Describes the format and codec of the provided audio data. Accepted values are `audio/wav`, `audio/mpeg`, `audio/mp3` and `audio/wave`. |
## Path Params

Parameter | Value |
---|---|
`conversationId` | The `conversationId` returned by the initial request submitted to the POST Async Audio API. |
## Query Params

Parameter | Required | Description |
---|---|---|
`name` | No | Your meeting name. Defaults to the `conversationId`. |
`webhookUrl` | No | Webhook URL to which job status updates are sent. It must accept POST requests. |
`customVocabulary` | No | Contains a list of words and phrases that provide hints to the speech recognition task. |
`detectPhrases` | No | Accepted values are `true` and `false`. When enabled, actionable phrases are detected in each sentence of the conversation. These sentences can be retrieved from the Conversation API's Messages endpoint. |
`entities` | No | Custom entities to be detected in your conversation using the Entities API. |
`enableSeparateRecognitionPerChannel` | No | Enables speaker-separated channel audio processing. Accepts `true` or `false`. |
`channelMetadata` | No | An object containing the `channel` and `speaker` fields that specify which speaker corresponds to which channel. This parameter only takes effect when the `enableSeparateRecognitionPerChannel` query param is set to `true`. |
`languageCode` | No | The language of the audio. Check the supported language codes and pick the one your audio requires. |
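The sample code for `entities` referenced in the original page is not shown here. The sketch below illustrates one plausible set of query parameters; all values are illustrative, and the exact shape of each `entities` item is an assumption, not confirmed by this page.

```javascript
// Hypothetical query parameters for the PUT Async Audio request.
// The entity shape ({ customType, text }) is an assumption; names and
// URLs are placeholders.
const queryParams = {
  name: 'Quarterly Review',
  webhookUrl: 'https://example.com/webhook', // must accept POST requests
  detectPhrases: true,
  customVocabulary: ['Symbl', 'conversational intelligence'],
  languageCode: 'en-US',
  entities: [
    { customType: 'Company Executives', text: 'Marketing director' },
  ],
};
```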
## Webhook Payload

The `webhookUrl` is used to send the status of the job created for the uploaded audio. Every time the status of the job changes, a notification is sent to the `webhookUrl`.

Parameter | Value |
---|---|
`jobId` | ID to be used with the Job API. |
`status` | Current status of the job. Valid statuses: `scheduled`, `in_progress`, `completed`, `failed`. |
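A payload POSTed to the `webhookUrl` would follow the table above; a sketch, with a placeholder `jobId`:

```javascript
// Example job-status notification, matching the Webhook Payload table.
// The jobId value is a placeholder.
const webhookPayload = {
  jobId: '9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d',
  status: 'in_progress', // one of: scheduled, in_progress, completed, failed
};
```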
## channelMetadata Object

The `channelMetadata` object has the following members:

Field | Description |
---|---|
`channel` | The channel number in the audio file. Each channel contains an independent speaker's voice data. |
`speaker` | The wrapper object that defines the speaker for this channel. |

The `speaker` object has the following members:

Field | Description |
---|---|
`name` | Name of the speaker. |
`email` | Email address of the speaker. |
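Putting the two tables together, a `channelMetadata` value for a two-channel file might look like the sketch below. The names and emails are placeholders, and passing the entries as an array (one per channel) is an assumption.

```javascript
// Hypothetical channelMetadata for a two-channel audio file: each entry
// maps a channel number to the speaker recorded on that channel.
const channelMetadata = [
  { channel: 1, speaker: { name: 'Robert Bartheon', email: 'robert@example.com' } },
  { channel: 2, speaker: { name: 'Arya Stark', email: 'arya@example.com' } },
];
```

Remember that this parameter only takes effect when `enableSeparateRecognitionPerChannel` is set to `true`.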
caution

Billing for a speaker-separated channel audio file is based on the number of channels present in the audio file. The duration used for billing is calculated with the following formula:

`totalDuration = duration_of_the_audio_file * total_number_of_channels`

So if you send a 120-second file with 3 speaker-separated channels, the total duration for billing would be 360 seconds, or 6 minutes.
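The billing formula above can be sketched as a one-line helper:

```javascript
// Billed duration = audio duration x number of channels, per the caution above.
function billedDurationSeconds(durationSeconds, channelCount) {
  return durationSeconds * channelCount;
}
```

For the example above, `billedDurationSeconds(120, 3)` yields 360 seconds.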
## Response

### Response Object

Parameter | Value |
---|---|
`conversationId` | ID to be used with the Conversation API. |
`jobId` | ID to be used with the Job API. |
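A successful response body follows the table above; a sketch, with placeholder IDs:

```javascript
// Example success response; both ID values are placeholders.
const response = {
  conversationId: '5815170693595136', // pass to the Conversation API
  jobId: '9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d', // pass to the Job API
};
```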