The streaming message feature allows you to stream bot messages directly from an LLM provider, such as OpenAI or Anthropic, to Sendbird channels. With streaming, the words of a bot response are displayed as they are generated, so the user sees the response as soon as the first word arrives rather than waiting for the entire response to finish. This can dramatically reduce the end user's wait time and improve the user experience.
The following is an example of how to integrate the streaming message feature into your application.
Python
import json

import requests
from openai import OpenAI

DEFAULT_PROMPT = """
You are a helpful assistant from Sendbird
"""

def stream_chat(prompt: str):
    first_chunk = True  # Flag to identify the first chunk
    client = OpenAI(api_key='YOUR_API_KEY')
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=[
            {"role": "system", "content": DEFAULT_PROMPT},
            {"role": "user", "content": prompt}
        ],
        stream=True  # Enable streaming of response messages
    )
    for chunk in response:
        content = chunk.choices[0].delta.content
        if content is not None:
            if first_chunk:
                # If it's the first chunk, send the initial message along with the channel URL.
                data = {
                    'channel_url': 'SENDBIRD_CHANNEL_URL',
                    'message': content,
                    # 'target_message_id': 'ID of the user message you are responding to' (optional)
                }
                yield json.dumps(data)
                first_chunk = False  # Update the flag so that subsequent chunks are sent as deltas
            else:
                # For subsequent chunks, send only the delta of the message.
                yield json.dumps({'message_delta': content})

res = requests.post(
    'https://api-{APP_ID}.sendbird.com/v3/bots/{bot_userid}/send_stream',
    data=stream_chat('Tell me about Sendbird'),  # Streams the generator as a chunked request body
    headers={'Api-Token': 'YOUR_API_TOKEN'},
)
print(res.content)
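To check the shape of the payloads that `stream_chat()` yields without calling OpenAI or Sendbird, you can drive the same first-chunk/delta logic with pre-generated text chunks. `build_stream_payloads` below is a hypothetical helper for illustration only, not part of the Sendbird or OpenAI APIs:

```python
import json

def build_stream_payloads(chunks, channel_url):
    # Reproduces the payload sequence of stream_chat(), but reads from a
    # plain list of text chunks instead of a live OpenAI response stream.
    first_chunk = True
    for content in chunks:
        if content is None:
            continue  # OpenAI streams can emit empty deltas; skip them
        if first_chunk:
            # First payload carries the channel URL and the opening text.
            yield json.dumps({'channel_url': channel_url, 'message': content})
            first_chunk = False
        else:
            # Subsequent payloads carry only the new text.
            yield json.dumps({'message_delta': content})

payloads = list(build_stream_payloads(
    ["Sendbird ", "is a ", "chat platform."], "SENDBIRD_CHANNEL_URL"))
for p in payloads:
    print(p)
```

Running this shows that only the first JSON object names the channel, while every later object contains a single `message_delta` key, which is the sequence the `send_stream` endpoint consumes from the request body.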