Instruct Model
The Nebula Instruct model generates output from a single request containing an instruction and a conversation transcript.
Unlike the Chat model, which takes a list of messages as input, the Instruct model takes an instruction and a conversation transcript. This is equivalent to sending the Chat model a single human message containing both the instruction and the transcript. When a chat-style interaction with the model is not necessary, the Instruct model is a good choice for singularly focused tasks that can be described in a single prompt: an instruction along with the conversation transcript.
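To illustrate the equivalence, the two request shapes can be compared side by side in Python. The Chat model's `messages` structure below is a hypothetical sketch for comparison only; refer to the Chat model documentation for its exact format.

```python
transcript = (
    "Customer: Hi Mark, I'm facing issues with our cloud provider.\n"
    "Mark: I'm here to help."
)
instruction = "Generate a summary of the conversation."

# Instruct model: instruction and transcript are separate fields.
instruct_payload = {
    "max_new_tokens": 1024,
    "prompt": {
        "instruction": instruction,
        "conversation": {"text": transcript},
    },
}

# Chat model equivalent (hypothetical sketch): the same content packed
# into a single human message.
chat_payload = {
    "max_new_tokens": 1024,
    "messages": [{"role": "human", "text": f"{instruction}\n\n{transcript}"}],
}
```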
Here's an example of a Generate API call to the Instruct model, first with cURL and then with Python:
export NEBULA_API_KEY="YOUR_API_KEY"
curl --location "https://api-nebula.symbl.ai/v1/model/generate" \
--header "ApiKey: $NEBULA_API_KEY" \
--header "Content-Type: application/json" \
--data '{
  "max_new_tokens": 1024,
  "prompt": {
    "instruction": "Generate a summary of the conversation emphasizing the Customer'\''s pain points.",
    "conversation": {
"text": "Customer: Hi Mark, I'\''m really glad to speak with you today. I'\''m facing some challenges with our current cloud provider, and I wanted to share my concerns with you.\nMark: Of course, I'\''m here to help. Please feel free to explain the issues you'\''re experiencing, and we'\''ll see how we can address them.\nCustomer: Thank you. Well, we'\''ve been using this provider for a while now, but we'\''ve noticed that their services have been slow and unreliable lately. It'\''s affecting our business operations, and we'\''re losing productivity due to frequent downtime and performance issues.\nMark: I understand. What specific issues are you encountering when using their services?\nCustomer: There are a few things. First, their server response times are really slow, and it'\''s affecting our website'\''s loading speed. Our customers have been complaining about the site being unresponsive, which is impacting our user experience negatively.\nMark: That'\''s definitely a concern. What else?\nCustomer: Additionally, we'\''ve been experiencing data loss and inconsistencies. We'\''ve lost critical data a couple of times, and it'\''s affecting our data integrity and team morale. It'\''s becoming difficult to trust them with our sensitive information.\nMark: I see. Data loss is a serious issue. Are you considering any specific cloud providers in mind?\nCustomer: Yes, we'\''ve been researching a few options, and we'\''re interested in exploring DataFly. We'\''ve heard good things about your services and would like to know if you can help us with a smooth migration process.\nMark: Absolutely, we'\''d be happy to help. Our team can ensure a seamless migration and provide you with reliable, fast, and secure cloud services. Let'\''s discuss the details and find the best solution for your business."
    }
  }
}'
import json

import requests

NEBULA_API_KEY = "YOUR_API_KEY"  # Replace with your API key
url = "https://api-nebula.symbl.ai/v1/model/generate"
payload = json.dumps({
    "max_new_tokens": 1024,
    "prompt": {
        "instruction": "Generate a summary of the conversation emphasizing the Customer's pain points.",
        "conversation": {
"text": "Customer: Hi Mark, I'm really glad to speak with you today. I'm facing some challenges with our current cloud provider, and I wanted to share my concerns with you.\nMark: Of course, I'm here to help. Please feel free to explain the issues you're experiencing, and we'll see how we can address them.\nCustomer: Thank you. Well, we've been using this provider for a while now, but we've noticed that their services have been slow and unreliable lately. It's affecting our business operations, and we're losing productivity due to frequent downtime and performance issues.\nMark: I understand. What specific issues are you encountering when using their services?\nCustomer: There are a few things. First, their server response times are really slow, and it's affecting our website's loading speed. Our customers have been complaining about the site being unresponsive, which is impacting our user experience negatively.\nMark: That's definitely a concern. What else?\nCustomer: Additionally, we've been experiencing data loss and inconsistencies. We've lost critical data a couple of times, and it's affecting our data integrity and team morale. It's becoming difficult to trust them with our sensitive information.\nMark: I see. Data loss is a serious issue. Are you considering any specific cloud providers in mind?\nCustomer: Yes, we've been researching a few options, and we're interested in exploring DataFly. We've heard good things about your services and would like to know if you can help us with a smooth migration process.\nMark: Absolutely, we'd be happy to help. Our team can ensure a seamless migration and provide you with reliable, fast, and secure cloud services. Let's discuss the details and find the best solution for your business."
        }
    }
})

headers = {
    "ApiKey": NEBULA_API_KEY,
    "Content-Type": "application/json",
}

response = requests.post(url, headers=headers, data=payload)
print(response.text)
You can take a look at the API reference to learn more about the Generation endpoints.
The input contains a prompt.instruction
field, a string holding the instruction to the model, that is, the command directing the model to perform a certain task. The prompt.conversation.text
field is a string containing the textual representation of the conversation, which can be the transcript of a call or meeting, a chat, an email, a social thread, and so on. Alternatively, you can pass the instruction and conversation text together as a single string directly in the prompt
field, which gives you more fine-grained control in your prompt engineering.
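As a sketch of that single-string form, the payload below combines an instruction and an abbreviated transcript into one prompt string (assuming, per the paragraph above, that the endpoint accepts prompt as a plain string):

```python
import json

# Instruction and transcript combined into one string, passed directly in
# the `prompt` field instead of the structured instruction/conversation form.
combined_prompt = (
    "Generate a summary of the conversation emphasizing the Customer's pain points.\n\n"
    "Customer: Hi Mark, I'm facing some challenges with our current cloud provider.\n"
    "Mark: Of course, I'm here to help."
)

payload = json.dumps({"max_new_tokens": 1024, "prompt": combined_prompt})

# Send with the same endpoint and headers as the structured example:
#   requests.post(url, headers=headers, data=payload)
print(payload)
```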
If you want to receive output progressively as it is generated, you can use the /v1/model/generate/streaming
endpoint.
Response Format
{
  "model": "nebula-instruct-large",
  "output": {
    "text": "The customer is experiencing challenges with their current cloud provider due to slow and unreliable services, which is affecting their business operations and causing data loss. They are interested in exploring DataFly as an alternative and are seeking Mark's assistance with a smooth migration process."
  },
  "stats": {
    "input_tokens": 405,
    "output_tokens": 53,
    "total_tokens": 458
  }
}
You can access the generated text from the parsed JSON response; in the Python example above, for instance:
response.json()['output']['text']
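To extract the fields programmatically, the JSON can be parsed as usual; this sketch works on a condensed copy of the sample response above:

```python
import json

# A condensed copy of the sample response, parsed as a client would parse it.
raw = """
{
  "model": "nebula-instruct-large",
  "output": {
    "text": "The customer is facing slow, unreliable cloud services and data loss, and wants help migrating to DataFly."
  },
  "stats": {"input_tokens": 405, "output_tokens": 53, "total_tokens": 458}
}
"""

data = json.loads(raw)
summary = data["output"]["text"]
total_tokens = data["stats"]["total_tokens"]
print(summary)
print(total_tokens)
```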
You can further tune other parameters to control the generation behavior of the model.