Generate automated meeting notes with Nebula LLM
In the following tutorial, we’ll describe how to generate automated meeting notes using Symbl.ai’s collection of APIs and the Nebula large language model (LLM). This tutorial provides step-by-step guidance on how to generate a transcript from a meeting, obtain a summary of the discussion, and extract key points and action items.
Step 1: Getting started
- This tutorial assumes that you have created an account on the Symbl.ai platform, have access to Nebula LLM, and have completed the necessary authentication. If you require assistance with any of these steps, refer to our docs on getting started and how to authenticate. If you need access to the Nebula Playground and the Model API, request access here.
- Use your app_Id and app_Secret to generate an access token:
import requests

# Obtain the API key
auth_response = requests.post(
    "https://api.symbl.ai/oauth2/token:generate",
    headers={"Content-Type": "application/json"},
    json={"type": "application", "appId": app_Id, "appSecret": app_Secret}
)
api_key = auth_response.json()["accessToken"]
Step 2: Generate a meeting transcript (Processing)
Step 2a: Processing a conversation
- The first step in generating automated meeting notes is to process a conversation or meeting. This can be done using Symbl.ai’s Async API interface for recorded or saved conversations in video, audio, and text formats, available either as a URL or a file. The approved video format is .mp4; the approved audio formats are .mp3 and .wav/.wave.
- To process video files from a URL to the Async API, use the following operation on the API explorer on the Symbl.ai platform or external platforms such as Postman:
POST https://api.symbl.ai/v1/process/video/url
Following is a code snippet in Python for submitting a video URL:
# Process the Meeting (Video URL) -> Replace name, URL and authorization placeholders
import requests

access_token = "<ACCESS_TOKEN>"
symblai_params = {
    "name": "<NAME>",
    "url": "<URL>"
}
headers = {
    "Authorization": "Bearer " + access_token,
    "Content-Type": "application/json"
}
response = requests.post(
    "https://api.symbl.ai/v1/process/video/url",
    headers=headers,
    json=symblai_params
)
print(response.json())
- To process audio files from a URL, use the following operation:
POST https://api.symbl.ai/v1/process/audio/url
Following is a code snippet in Python for submitting an audio URL:
# Process the Meeting (Audio URL) -> Replace name, URL and authorization placeholders
import requests

access_token = "<ACCESS_TOKEN>"
symblai_params = {
    "name": "<NAME>",
    "url": "<URL>"
}
headers = {
    "Authorization": "Bearer " + access_token,
    "Content-Type": "application/json"
}
response = requests.post(
    "https://api.symbl.ai/v1/process/audio/url",
    headers=headers,
    json=symblai_params
)
print(response.json())
- <ACCESS_TOKEN> is a valid API access token.
- <NAME> is the name of the conversation.
- <URL> is a direct URL to a supported audio or video file.
Once you process a video or audio URL, you will receive a conversationId, a unique identifier of the conversation submitted to the Async API.
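Processing is asynchronous, so the transcript in the next step is only available once the job finishes. The sketch below polls the Async API job status endpoint until the job completes; the `jobId` placeholder and the `"completed"`/`"failed"` status values are based on Symbl.ai’s Async API job docs, so verify them against the response of your own /process call.

```python
# Minimal sketch of waiting for an Async API job to finish before fetching
# a transcript. jobId is returned alongside conversationId by the /process call.
import time
import requests

def job_status_url(job_id: str) -> str:
    """Build the Async API job status URL for a given jobId."""
    return f"https://api.symbl.ai/v1/job/{job_id}"

def wait_for_job(job_id: str, access_token: str, interval: int = 5) -> str:
    """Poll the job until it completes or fails; return the final status."""
    headers = {"Authorization": "Bearer " + access_token}
    while True:
        status = requests.get(job_status_url(job_id), headers=headers).json()["status"]
        if status in ("completed", "failed"):
            return status
        time.sleep(interval)

# Usage (placeholders to be replaced with real values):
# final_status = wait_for_job("<JOB_ID>", "<ACCESS_TOKEN>")
```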
Step 2b: Creating a formatted transcript
- Once a conversation has been successfully processed, a formatted meeting transcript can now be generated. To get a transcript of your conversation, use the following operation:
GET https://api.symbl.ai/v1/conversations/{conversationId}/transcript
Following is a code snippet in Python for generating a formatted transcript:
# Produce formatted transcript -> Replace conversationId and access token placeholders,
# and choose payload parameters appropriately (see below for guidance)
import requests

access_token = "<ACCESS_TOKEN>"
conversation_id = "<CONVERSATION_ID>"
url = f"https://api.symbl.ai/v1/conversations/{conversation_id}/transcript"

payload = {
    "contentType": "text/markdown",
    "createParagraphs": True,
    "phrases": {
        "highlightOnlyInsightKeyPhrases": True,
        "highlightAllKeyPhrases": True
    },
    "showSpeakerSeparation": True
}
headers = {
    "Authorization": "Bearer " + access_token,
    "accept": "application/json",
    "content-type": "application/json"
}
response = requests.post(url, json=payload, headers=headers)
print(response.text)
There are multiple features on the platform that allow you to customize the generated transcript to meet your specifications as follows:
- Choose the SubRip Subtitle (SRT) format for your transcript by adjusting the ‘contentType’ parameter to text/srt.
- Refine the transcript output by setting the ‘createParagraphs’ parameter to ‘True’ to obtain well-formatted transcripts.
- Identify unique speakers (speaker diarization) in the conversation and their utterances by setting the ‘showSpeakerSeparation’ parameter to ‘True’.
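To keep these options in one place, the sketch below wraps the parameters above in a small helper that builds the transcript request payload for either a markdown or an SRT transcript. The helper name and its defaults are our own illustration, not part of the Symbl.ai SDK.

```python
# Hypothetical helper: build the /transcript request payload using the
# customization parameters described above ('markdown' or 'srt' output).
def transcript_payload(fmt: str = "markdown", diarize: bool = True) -> dict:
    """Return a payload dict for the transcript endpoint."""
    if fmt not in ("markdown", "srt"):
        raise ValueError("fmt must be 'markdown' or 'srt'")
    return {
        "contentType": f"text/{fmt}",
        "createParagraphs": True,
        "phrases": {
            "highlightOnlyInsightKeyPhrases": True,
            "highlightAllKeyPhrases": True
        },
        "showSpeakerSeparation": diarize
    }

# Example: payload for an SRT transcript without speaker separation
srt_payload = transcript_payload("srt", diarize=False)
```

The resulting dict can be passed as the `json=` argument of the transcript request shown above.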
Step 3: Generate meeting notes with Nebula (Summary, Key Decisions & Action Items)
- Having obtained the meeting transcript, you can use Nebula to perform a range of conversation intelligence tasks such as generating summaries, determining key decisions made, listing action items and more.
To use the Model API:
- Add your ApiKey to the header. You should have received the key when you were granted access to the model.
- Provide a prompt parameter along with the instruction and conversation in the request body for the Model API to generate the desired output.
Step 3a: Generate Summary using Nebula Model API
- By copying the conversation transcript and providing an instruction to “Generate summary of this conversation”, you can obtain a summary:
Following is a code snippet in Python for generating a summary using the Nebula Model API:
import requests
import json

url = "https://api-nebula.symbl.ai/v1/model/generate"

# Prepare the request payload
payload = json.dumps(
    {
        "prompt": {
            # Your instruction or question goes here
            "instruction": "Generate summary of this conversation.",
            "conversation": {
                # Your conversation transcript goes here
                "text": "Representative: Hello, How are you?\nCustomer: Hi, good. I am trying to get access to my account but ...."
            }
        },
        "max_new_tokens": 256
    }
)

# Set the headers with the API key
headers = {
    "ApiKey": api_key,
    "accept": "application/json",
    "Content-Type": "application/json",
}

# Make the POST request
response = requests.post(url, headers=headers, data=payload)

# Process the response
if response.status_code == 200:
    data = response.json()
    print("Output:\n", data)
else:
    print("Request failed with status code:", response.status_code)
    print("Error message:", response.text)
Step 3b: Generate key decisions using Nebula Model API
- Use the above code sample and simply replace the instruction parameter to ask questions about the conversation and obtain key decisions from the discussion. For example, an instruction such as “What are the key decisions made by XYZ in the conversation?”.
Following is the relevant section of the code snippet to be updated with the appropriate instruction:
"prompt": {
    # Your instruction or question goes here
    "instruction": "What are the key decisions made by XYZ in the conversation?",
    "conversation": {
        # Your conversation transcript goes here
        "text": "Representative: Hello, How are you?\nCustomer: Hi, good. I am trying to get access to my account but ...."
    }
},
Step 3c: List action items using Nebula Model API
- Nebula can also be used to analyze the conversation transcript and identify the action items pertaining to the conversation. To do this, use the above code sample and simply replace the instruction parameter to obtain action items from the discussion. For example, an instruction such as “List the key action items for XYZ in the conversation?”.
Following is the relevant section of the code snippet to be updated with the appropriate instruction:
"prompt": {
    # Your instruction or question goes here
    "instruction": "List the key action items for XYZ in the conversation?",
    "conversation": {
        # Your conversation transcript goes here
        "text": "Representative: Hello, How are you?\nCustomer: Hi, good. I am trying to get access to my account but ...."
    }
},
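Since Steps 3a–3c differ only in the instruction, a small helper can build the request body once and take the instruction as a parameter. The function name below is our own illustration; the endpoint, field names, and `max_new_tokens` default come from the summary example above.

```python
# Hypothetical helper: serialize a Nebula Model API request body for any
# instruction/transcript pair, reusing the payload shape from Step 3a.
import json

def nebula_request_body(instruction: str, transcript: str,
                        max_new_tokens: int = 256) -> str:
    """Return the JSON request body for the Nebula Model API."""
    return json.dumps({
        "prompt": {
            "instruction": instruction,
            "conversation": {"text": transcript}
        },
        "max_new_tokens": max_new_tokens
    })

# Usage (same URL and headers as the summary example above):
# payload = nebula_request_body(
#     "List the key action items for XYZ in the conversation?", transcript_text)
# response = requests.post(url, headers=headers, data=payload)
```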
Note:
- The exact wording of the instruction in the prompt parameter should be adapted to your requirements.
For a no-code option, use the Nebula Playground to obtain the above insights by pasting or attaching the transcript and typing in the instruction.