Prompt design
Nebula is currently in private beta.
To request access, see the sign-up form.
This page describes how to design prompts for the Nebula LLM (large language model). The quality of the instructions and conversations you provide in prompts can significantly affect the quality of Nebula's output, especially for complex tasks.
Understanding prompts
In contrast to a typical generative language model, a prompt for Nebula has two components:
- Instruction: A description of the task to perform or a question related to the conversation.
- Conversation: A text transcript of the conversation.
These components are exposed as two subfields of the prompt object in the API request, giving the model a clear separation between instruction and conversation.
{
  "prompt": {
    "instruction": "your task instruction or question goes here",
    "conversation": {
      "text": "your conversation transcript goes here"
    }
  }
}
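As a concrete illustration, the following sketch sends this prompt object with Python's requests library. The endpoint URL, authentication header, and response handling are placeholder assumptions for illustration only; refer to the Nebula API reference for the actual request format.

import requests

# Placeholder values -- not the real Nebula endpoint or credentials.
NEBULA_ENDPOINT = "https://api.example.com/nebula/generate"
API_KEY = "YOUR_API_KEY"

payload = {
    "prompt": {
        "instruction": "Generate a short summary of this conversation.",
        "conversation": {
            "text": "Agent: Thanks for calling, how can I help?\nCustomer: I'd like to upgrade my plan."
        }
    }
}

# Send the prompt object as the JSON request body; header names are assumptions.
response = requests.post(
    NEBULA_ENDPOINT,
    headers={"Authorization": API_KEY, "Content-Type": "application/json"},
    json=payload,
)
response.raise_for_status()
print(response.json())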
Creating prompts
Nebula is designed to perform tasks or answer questions based on the conversation provided in the transcript; it is not trained to be a conversational assistant. Keep this in mind when creating prompts.
Instruction
The instruction should clearly describe the task you'd like Nebula to perform on the provided conversation and express your intent.
Here are some example instructions.
Provide sufficient details
Avoid instructions like the following example, which is too abstract and provides no detail about the task.
Bad instruction (too abstract)
Summarize.
Although this instruction still works, it omits task details. It may be good enough for a small number of requests, but providing details clearly and concisely improves the output.
For example, this instruction provides precise, concise details of the same task.
Good instruction
Generate a short summary of this conversation.
Specify relevant tasks directly on the conversation
Make sure the instruction contains tasks that are relevant to the provided conversation. Don't include generic tasks that are irrelevant to the conversation, such as asking for code or general information about the world. Instead, include tasks that you'd like Nebula to perform on the conversation, such as analyzing the conversation, summarizing its content, or drafting an email.
In the instruction, ensure that the task is performed directly on the conversation. For example:
Bad instruction (no explicit focus on the conversation)
Generate a short summary.
Good instruction (explicit focus on the conversation)
Generate a short summary of this conversation.
Good instruction (explicit focus on the conversation)
Based on this conversation, what are the key concerns expressed regarding the launch plan?
Specify the conversation type
Whenever possible, instead of using the generic term "conversation," use a more specific term in your instruction. For example, you could indicate that it's a sales call, customer support call, or interview.
Good instruction (specific conversation type used)
What are the next steps for the support representative based on this customer support call?
Avoid instructions that are out of context
Avoid asking general questions like the one in the following example. The model will likely not produce correct output in such cases because it is not trained to answer general-purpose questions; instead, it attempts to find the information in the conversation.
Bad instruction (not a conversation-relevant question)
Who is Barack Obama?
The following question works well because Anne Marie is involved in the conversation.
Good instruction (question is relevant for conversation)
Who is Anne Marie in this conversation?
In the following example, the instruction works well if the conversation is a customer support call.
Good instruction (task is sensible for conversation)
Based on this conversation, make some recommendations for the support agent to help improve their performance in upcoming calls.
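Putting these guidelines together, a complete prompt object for this task might look like the following Python sketch. Only the prompt structure and the instruction text come from this page; the transcript snippet is invented for illustration.

# A sketch of a full prompt combining a conversation-relevant instruction with a
# customer support call transcript. The transcript content is hypothetical.
prompt_payload = {
    "prompt": {
        "instruction": (
            "Based on this conversation, make some recommendations for the support "
            "agent to help improve their performance in upcoming calls."
        ),
        "conversation": {
            "text": (
                "Anne Marie: Hi, I'm calling because my invoice looks wrong this month.\n"
                "Support Agent: Sorry about that. Let me pull up your account and take a look.\n"
                "Anne Marie: Thanks. I was charged twice for the same add-on.\n"
                "Support Agent: I see the duplicate charge. I'll refund it and follow up by email."
            )
        },
    }
}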
Transcript
The conversation transcript is a textual representation of the actual conversation. To get accurate, high-quality output, use transcripts with clear separation between speakers and a low word error rate (WER). The model will still attempt to compensate for a few errors whenever possible, but reducing errors and providing clear, accurate speaker annotation increases the quality and accuracy of the results.
We recommend this format for transcripts:
<Speaker 1 Name>: <Sentences spoken by Speaker 1 continuously>
<Speaker 2 Name>: <Sentences spoken by Speaker 2 continuously>
.
.
.
Each line consists of the speaker's name and the sentences spoken by that speaker, separated by a colon (:) and one white space, and ends with a newline (\n) character. This format uses a minimal number of characters for separators and new lines, and it avoids repeating the speaker's name more often than necessary.
Although this format is not a strict requirement, we recommend it to keep token usage low and to help the model produce more accurate output.
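As an illustration, the following Python sketch builds a transcript string in the recommended format from a list of speaker-labeled utterances. The utterance data structure here is an assumption made for the example, not part of the Nebula API.

def format_transcript(utterances):
    # Join (speaker, text) pairs into one "<Speaker Name>: <sentences>" line per
    # speaker turn, separated by a newline character.
    return "\n".join(f"{speaker}: {text}" for speaker, text in utterances)

# Hypothetical speaker-labeled utterances, e.g. from a diarized transcription step.
utterances = [
    ("Anne Marie", "Hi, I'm calling about a duplicate charge on my invoice."),
    ("Support Agent", "Sorry about that. I'll refund the extra charge right away."),
]

conversation_text = format_transcript(utterances)
print(conversation_text)
# Anne Marie: Hi, I'm calling about a duplicate charge on my invoice.
# Support Agent: Sorry about that. I'll refund the extra charge right away.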