How to implement speaker separation with Async Audio or Video Files
Symbl's Async API allows you to process stored audio or video recordings from files or URLs, or even textual content from a conversation. In this guide, we will walk you through how to implement Speaker Separation with audio or video files. Speaker Separation, in short, is the ability to detect and separate unique speakers in a single stream of audio or video without the need for separate speaker events.
## Contents

- Enabling the Diarization
- Updating the Detected Members
- Appending to an Existing Conversation With Speaker Separation
- An Example Scenario
- The Email Identifier
- Best Practices
## Enabling the Diarization

Enabling Speaker Separation in the Async Audio or Video API is as simple as adding the `enableSpeakerDiarization=true` and `diarizationSpeakerCount=<NUMBER_OF_UNIQUE_SPEAKERS>` query parameters to the request URL.

This snippet shows a cURL command for consuming the Async Video URL-based API, which takes the URL of a publicly available video file:
- cURL
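As a rough sketch, a request URL carrying both query parameters could be constructed as follows. The endpoint path, speaker count, and webhook URL below are illustrative assumptions; substitute your own values:

```python
from urllib.parse import urlencode

# Assumed endpoint for the Async Video URL API; verify against the API Reference.
base_url = "https://api.symbl.ai/v1/process/video/url"

params = {
    "enableSpeakerDiarization": "true",
    "diarizationSpeakerCount": 2,  # should match the real number of unique speakers
    "webhookUrl": "https://example.com/webhook",  # hypothetical status callback URL
}

request_url = f"{base_url}?{urlencode(params)}"
print(request_url)
```

The resulting URL would then be used in a POST request with an `Authorization: Bearer <AUTH_TOKEN>` header and a JSON body pointing at the publicly available video file.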
info
The above example uses the Async Video URL API, but Speaker Separation can be achieved with the other Async Audio/Video APIs in the same way.

- `AUTH_TOKEN` needs to be replaced with the Bearer token generated during our authentication process.
- `WEBHOOK_URL` can be replaced with a WebHook URL for receiving the status of the Job created after calling the API.
- For accuracy, `NUMBER_OF_UNIQUE_SPEAKERS` should match the number of unique speakers in the audio/video data.
The above URL has two query parameters:

| Parameter Name | Type | Description |
|---|---|---|
| `enableSpeakerDiarization` | Boolean | Enables speaker separation for the audio or video data under consideration. |
| `diarizationSpeakerCount` | Integer | Sets the number of unique speakers in the audio or video data under consideration. |
## Identifying Unique Speakers

Invoking the `members` call in the Conversation API will return the uniquely identified speakers for this conversation when Speaker Diarization is enabled.
### Code Example

View the API Reference for information on how to get member information.
### JSON Response Example

The `name` assigned to a uniquely identified speaker/member from separated audio/video follows the format `Speaker <number>`, where `<number>` is arbitrary and does not necessarily reflect the order in which someone spoke.

The `id` can be used to identify a speaker/member within that specific conversation, and can be used to update the details of that specific member, as demonstrated in the Updating the Detected Members section below.
## Getting the Speaker Separated Results

Invoking the `messages` call in the Conversation API returns the speaker-separated results.
### Code Example

View the API Reference for information on how to get speech-to-text messages from the conversation.

👉 GET Messages
### JSON Response Example

The above snippet shows the speaker in the `from` object with a unique ID. These are the uniquely identified `members` of this conversation.
info
Reminder: The speaker number in the above snippet is arbitrary and the number doesn’t necessarily reflect the order in which someone spoke.
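A diarized messages payload can be grouped by speaker, for example to reassemble each person's lines of the transcript. The payload shape below is a simplified assumption based on the `from` object described above:

```python
# Simplified, hypothetical messages payload; real responses carry more fields.
messages_response = {
    "messages": [
        {"text": "Hello everyone.",
         "from": {"id": "id-1", "name": "Speaker 1"}},
        {"text": "Hi, thanks for joining.",
         "from": {"id": "id-2", "name": "Speaker 2"}},
        {"text": "Let's get started.",
         "from": {"id": "id-1", "name": "Speaker 1"}},
    ]
}

# Group the transcript lines by the uniquely identified speaker.
by_speaker = {}
for msg in messages_response["messages"]:
    by_speaker.setdefault(msg["from"]["name"], []).append(msg["text"])

print(by_speaker["Speaker 1"])  # ['Hello everyone.', "Let's get started."]
```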
## Gain insights

Similarly, invoking the `insights` call in the Conversation API also reflects the identified speakers in the detected insights.
### Code Example

View the API Reference for information on how to get insights from the conversation.

👉 GET Insights
### JSON Response Example
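An insights payload for a diarized conversation might look roughly like the following sketch. The insight texts and IDs are made up, and the shape is a simplified assumption; the point is that each insight carries the uniquely identified speaker in its `from` object:

```python
# Hypothetical insights payload for a diarized conversation.
insights_response = {
    "insights": [
        {"type": "action_item",
         "text": "Send the report by Friday.",
         "from": {"id": "id-1", "name": "Speaker 1"}},
        {"type": "question",
         "text": "When is the next review scheduled?",
         "from": {"id": "id-2", "name": "Speaker 2"}},
    ]
}

# Each insight is attributed to a uniquely identified speaker.
speakers = [i["from"]["name"] for i in insights_response["insights"]]
print(speakers)  # ['Speaker 1', 'Speaker 2']
```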
## Updating the Detected Members

The detected members (unique speakers) will have names like `Speaker 1`, since automatic speaker recognition has no context for who a speaker is (their name or other details). Therefore, it is important to update the details of the detected speakers after the `Job` is marked as `complete`.
### GET members

The `members` call in the Conversation API returns the uniquely identified speakers, as shown in the Identifying Unique Speakers section above, when Speaker Separation is enabled.

Let's consider the same set of members that can be retrieved by calling the GET members call in the Conversation API.
### JSON Response Example

### PUT members

We can now use the PUT `members` call to update the details of a specific member, as shown below. This call updates `Speaker 2` from the section above with the values in the cURL request body:
- CURL
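As a sketch, the update request could be assembled like this. The endpoint path is an assumption to be verified against the API Reference, and the member details are illustrative:

```python
import json

# Hypothetical values; substitute the real conversation and member IDs.
conversation_id = "CONVERSATION_ID"
member_id = "2f69f1c8-bf0a-48ef-b47f-95ae5a4de325"

# Assumed PUT members endpoint shape; verify against the API Reference.
url = f"https://api.symbl.ai/v1/conversations/{conversation_id}/members/{member_id}"

body = {
    "id": member_id,
    "name": "John Doe",
    "email": "john@example.com",  # optional identifier for tracking this member
}

print(url)
print(json.dumps(body))
```

The request would be sent as a PUT with an `Authorization: Bearer <AUTH_TOKEN>` header and a `Content-Type: application/json` header.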
- `CONVERSATION_ID` needs to be replaced with the actual Conversation ID (`conversationId`).
- `AUTH_TOKEN` needs to be replaced with the Bearer token generated during our authentication process.
The URL has the `id` of the `member` we want to update appended to PUT `/members`, with the request body containing the updated `name` of this `member`.

There is also the option to include the `email` of the member. The `email` will be used as an identifier for tracking that specific member uniquely in the conversation. (Refer to The Email Identifier section below for more details.)
After the above call is successful, we will receive the following response:

The `message` is self-explanatory and tells us that all references to the `member` with the `id` of `2f69f1c8-bf0a-48ef-b47f-95ae5a4de325` in the conversation should now reflect the new values we updated this `member` with. That includes `insights`, `messages`, and the conversation's `members` as well.
So if we call the GET `/members` API now, we will see the following result:

And similarly, the GET `/messages` API call will reflect the updates as well:

Curious about the GET `/insights` API? It reflects these updates too!
## Appending to an Existing Conversation With Speaker Separation

Because conversations don't always end neatly and may resume later, our Async API allows you to update/append to an existing conversation. You can read more about this capability here.

To enable Speaker Separation with the append capability, the request structure is the same as shown above for creating a new conversation. You would need to pass in the `enableSpeakerDiarization=true` and `diarizationSpeakerCount=<NUMBER_OF_UNIQUE_SPEAKERS>` query parameters.

However, there is one caveat in how the ASR works with Speaker Separation. Consider the following scenario:
## An Example Scenario

We send a recorded conversation with 2 speakers, John and Alice, to the Async API with `enableSpeakerDiarization=true`. The diarization identifies them as `Speaker 1` and `Speaker 2` respectively. We then update the above speakers with their `email` as `john@example.com` and `alice@example.com`.
Now we use the append call to add another conversation with 2 speakers, John and May, with `enableSpeakerDiarization=true`. Let's assume that the diarization now identifies these as `Speaker 1` and `Speaker 2` respectively. As discussed before, these numbers are arbitrary and have nothing to do with the order in which the speakers spoke in the conversation.
After this job is complete, we will have 4 members in this conversation:

- `John`
- `Alice`
- `Speaker 1` (which is `John` again)
- `Speaker 2` (which is `May`)
Since `John` and `Speaker 1` refer to the same speaker but are labeled as different speakers, their `member` references will be different for all `messages` and `insights` that they are a part of.
## The Email Identifier

This is where the `email` identifier comes in. The PUT `members` call can uniquely identify and merge `member` entries that share the same `email` parameter, eliminating duplicate references in favor of a single reference across the entire conversation. This updates all the references, including the `members`, `messages`, and `insights`.
If we were to execute a PUT `members` call with the below body, where `74001a1d-4e9e-456a-84ed-81bbd363333a` refers to the `id` of `Speaker 1` from the above scenario, it would eliminate that `member` and update all of its references to the member represented by `2f69f1c8-bf0a-48ef-b47f-95ae5a4de325`, which we know is `John Doe`.
- CURL
This is possible because the `email` uniquely identifies that user.
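A toy sketch of the merge behavior, using the IDs from the scenario above (the payload shapes are simplified assumptions):

```python
# Two member records that refer to the same real person via a shared email.
members = [
    {"id": "2f69f1c8-bf0a-48ef-b47f-95ae5a4de325",
     "name": "John Doe", "email": "john@example.com"},
    {"id": "74001a1d-4e9e-456a-84ed-81bbd363333a",
     "name": "Speaker 1", "email": "john@example.com"},
]

# Merge members by email: the first record per email is kept, duplicates merge
# into it, so every reference resolves to one canonical member.
merged = {}
for m in members:
    merged.setdefault(m["email"], m)

print(len(merged))                         # 1
print(merged["john@example.com"]["name"])  # John Doe
```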
## Best Practices

- `diarizationSpeakerCount` should be equal to the number of unique speakers present in the conversation for best results, as the diarization model uses this number to probabilistically determine the speakers. If this number differs from the actual number of speakers, it may introduce false positives for some parts of the transcription.
- For the best experience, the sample rate of the data should be greater than or equal to 16000Hz.