Build a Real-Time Assist Solution with Nebula LLM
Boost your customer service with our Real-Time Agent Assist solution, powered by Nebula LLM. This guide shows you how to build a real-time assist solution for your teams, making them more effective and your customers happier.
Prerequisites
- Basic understanding of APIs
- Access to Symbl.ai's Nebula LLM API
- Knowledge of vectors and vector databases
Step-by-Step Guide
Step 1: API access
- Sign up on Symbl.ai and log in to the Symbl.ai platform to get your AppId and AppSecret.
- Use the Authentication API to generate an access token to make calls to Symbl.ai APIs.
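For example, a minimal token request could look like the sketch below. It assumes Node 18+ (for the global fetch) and follows the Authentication API's token:generate endpoint; supply your own AppId and AppSecret.

```typescript
// Minimal sketch: exchange an AppId and AppSecret for an access token.
// Verify the endpoint and payload against the Authentication API docs for your account.

interface TokenResponse {
  accessToken: string;
  expiresIn: number;
}

async function getAccessToken(appId: string, appSecret: string): Promise<string> {
  const response = await fetch("https://api.symbl.ai/oauth2/token:generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ type: "application", appId, appSecret }),
  });
  if (!response.ok) {
    throw new Error(`Token request failed with status ${response.status}`);
  }
  const { accessToken } = (await response.json()) as TokenResponse;
  return accessToken;
}
```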
Step 2: Indexing your Knowledge Base
- To get access to the Nebula LLM, sign up here.
- To index the documents in the knowledge base, each document needs to be vectorized. The knowledge base might contain multiple documents covering a wide range of topics, so break the documents down into smaller, meaningful chunks.
- Use the Nebula Embedding API to convert these text chunks into vectors with the Nebula embedding model.
- Once the embeddings are created, store them and their associated content in a vector database for retrieval based on triggers.
For more information about using embeddings, see the Embedding API guide.
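As a rough illustration of this step, the sketch below chunks each document, calls the Nebula Embedding API for every chunk, and collects records ready to upsert into a vector database. The endpoint path, ApiKey header, response shape, and the chunkDocument and indexKnowledgeBase helpers are assumptions for illustration; confirm the API contract in the Embedding API guide and replace the final upsert with your vector database's own client call.

```typescript
// Assumed Nebula Embedding API path and response shape; check the Embedding API guide.
const NEBULA_EMBED_URL = "https://api-nebula.symbl.ai/v1/model/embed";

interface IndexedChunk {
  id: string;
  text: string;
  vector: number[];
}

// Naive chunking: split each document into ~500-character pieces.
function chunkDocument(doc: string, maxChars = 500): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < doc.length; i += maxChars) {
    chunks.push(doc.slice(i, i + maxChars));
  }
  return chunks;
}

async function embedText(text: string, nebulaApiKey: string): Promise<number[]> {
  const response = await fetch(NEBULA_EMBED_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json", ApiKey: nebulaApiKey },
    body: JSON.stringify({ text }),
  });
  if (!response.ok) {
    throw new Error(`Embedding request failed with status ${response.status}`);
  }
  const body = (await response.json()) as { embedding: number[] }; // assumed response shape
  return body.embedding;
}

// Build (id, text, vector) records ready to upsert into your vector database
// (Pinecone, Weaviate, pgvector, etc.; the upsert call itself is DB-specific).
async function indexKnowledgeBase(docs: string[], nebulaApiKey: string): Promise<IndexedChunk[]> {
  const records: IndexedChunk[] = [];
  for (const [docIdx, doc] of docs.entries()) {
    for (const [chunkIdx, text] of chunkDocument(doc).entries()) {
      const vector = await embedText(text, nebulaApiKey);
      records.push({ id: `doc${docIdx}-chunk${chunkIdx}`, text, vector });
    }
  }
  return records;
}
```

The 500-character chunk size is only a placeholder; chunk boundaries that follow sections or paragraphs usually retrieve more coherent content.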
Step 3: Setting Up Triggers
- To assist your teams with the appropriate content, you can choose when Real-Time Assist kicks in and provides results. You can set up questions, trackers, or specific topics as triggers.
- Symbl.ai detects questions and trackers in an ongoing conversation between the agent and the customer.
- Questions are detected automatically, whereas trackers need to be configured, for example by the company’s customer success managers, so that phrases such as ‘competitor mentions’ or ‘overcharge’ are identified when the customer says them.
- Symbl.ai provides 40 out-of-the-box trackers, including general trackers and trackers specific to contact centers. You can add these trackers on the platform by navigating to tracker management.
- To add new trackers to your account, create custom trackers specific to your domain with a tracker name and vocabulary.
For more information about using trackers, see the Trackers guide.
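For example, a custom ‘Overcharge’ tracker could be created with a request along these lines; the management endpoint path and payload shape are assumptions to verify against the Trackers guide.

```typescript
// Sketch of creating a custom tracker with a name and vocabulary.
// Endpoint path and body shape are assumptions; confirm them in the Trackers guide.

async function createOverchargeTracker(accessToken: string): Promise<void> {
  const response = await fetch("https://api.symbl.ai/v1/manage/tracker", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${accessToken}`,
    },
    body: JSON.stringify({
      name: "Overcharge",
      vocabulary: ["overcharged", "billed twice", "charged more than expected"],
    }),
  });
  if (!response.ok) {
    throw new Error(`Tracker creation failed with status ${response.status}`);
  }
  console.log("Tracker created:", await response.json());
}
```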
Step 4: Streaming Conversations
- To stream the conversation, use Symbl.ai’s Web SDK. The Web SDK provides a bi-directional stream which streams the conversation and also shows the knowledge base results to the representative.
- Install the Web SDK with a simple npm command and import the latest version. Use the createAndStartNewConnection method in the Symbl class to stream the audio to Symbl.ai. For more information about methods, see the Web SDK reference.
- Vectorizing the trigger: To get the appropriate content suggestions from the knowledge base, the embeddings in the vector database should be queried with triggers to match similarities.
- Identifying similarities: To identify similarities, the trigger from the stream should be converted into a vector via the Nebula Embedding API. Once the trigger is vectorized, query the vector database with it to identify similar vectors and their associated content.
- Once the similar content is identified, you can either stream the links directly to the representative via the bi-directional stream with the Web SDK, or send the content to Nebula LLM to summarize it and generate the right answer for the trigger. These outputs are sent as events from the Web SDK.
- The callback response contains a summary of the answer along with links to the documents associated with the trigger. For more information about events and callbacks, see the events and callbacks page.
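Putting this step together, the sketch below opens a Web SDK connection, treats detected questions and tracker hits as triggers, vectorizes each trigger, queries the vector database for similar content, and surfaces a suggested answer to the representative. The event names, connection options, and the AssistDeps helpers (embedText from the Step 2 sketch, querySimilarChunks, summarize, showToAgent) are illustrative assumptions; check the Web SDK reference for the exact method and event contracts.

```typescript
import { Symbl } from "@symblai/symbl-web-sdk";

type Chunk = { text: string; url: string };

// Helpers the caller supplies; all of these are illustrative placeholders.
interface AssistDeps {
  embedText: (text: string) => Promise<number[]>;                           // from the Step 2 sketch
  querySimilarChunks: (vector: number[], topK: number) => Promise<Chunk[]>; // your vector DB query
  summarize: (trigger: string, chunks: Chunk[]) => Promise<string>;         // e.g. a Nebula LLM call
  showToAgent: (answer: string, links: string[]) => void;                   // push into the agent UI
}

async function startAgentAssist(accessToken: string, deps: AssistDeps): Promise<void> {
  const symbl = new Symbl({ accessToken });

  // Opens the bi-directional stream; audio capture options and the exact
  // connection config fields are described in the Web SDK reference.
  const connection = await symbl.createAndStartNewConnection({
    id: `agent-assist-${Date.now()}`,
    insightTypes: ["question"],          // questions are detected automatically
    trackers: [{ name: "Overcharge" }],  // trackers configured in Step 3 (assumed shape)
  });

  // Treat detected questions and tracker hits as triggers.
  const handleTrigger = async (triggerText: string) => {
    if (!triggerText) return;
    const vector = await deps.embedText(triggerText);         // vectorize the trigger
    const chunks = await deps.querySimilarChunks(vector, 3);  // find similar knowledge base content
    const answer = await deps.summarize(triggerText, chunks); // generate a suggested answer
    deps.showToAgent(answer, chunks.map((c) => c.url));
  };

  // Event names and payload fields are assumptions; verify them in the Web SDK reference.
  connection.on("question", (event: any) => handleTrigger(event?.payload?.content ?? ""));
  connection.on("tracker", (event: any) => handleTrigger(event?.name ?? ""));
}
```

Before the connection produces events, you also need to wire up audio capture (for example, the agent's microphone or desktop audio) as described in the Web SDK reference.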
If you have any questions or comments, contact us at [email protected].