BF Streaming API
Overview
A call to this service posts the request to the LLM and streams the results back to the appropriate channel. This service is in beta release as of the time of writing.
Available in bf-staging only
Create endpoint: portal.myapp.com//stream/create
Method: POST
Request body parameters:

apiKey (string): Unique API key for BF (found in app settings)
channels (array): Array of BF Messaging communication channels
actionName (string): Name of the action to be performed
payload (object): Parameters for the action; these are passed on to the streaming service
payload.provider (string): Name of the AI provider to be used (e.g., "openai", "gemini"). Defaults to "openai" if not specified.
payload.apiKey (string): Unique API key for the selected AI provider
payload.stream (bool): Whether streaming is enabled. If false, the result is returned directly in the POST response.
payload.messages (array): Messages to be processed. Structure may vary slightly based on the provider.
payload.model (string): Model to be used for processing (e.g., "gpt-4-turbo-preview" for OpenAI, "gemini-pro" for Gemini)
payload.generationConfig (object, optional, Gemini only): Generation parameters for the Gemini provider.
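The parameters above can be sketched as a request body. This is a minimal illustration only: the channel name, action name, message content, and all keys shown are placeholder values, not values defined by this document.

```python
import json

# Sketch of a /stream/create request body assembled from the documented
# parameters. All string values below are placeholders (not real credentials).
request_body = {
    "apiKey": "YOUR_BF_API_KEY",            # BF API key, from app settings
    "channels": ["example-channel"],         # placeholder BF Messaging channel
    "actionName": "example-action",          # placeholder action name
    "payload": {
        "provider": "openai",                # or "gemini"; defaults to "openai"
        "apiKey": "YOUR_PROVIDER_API_KEY",   # key for the selected AI provider
        "stream": True,                      # False returns the result in the POST response
        "model": "gpt-4-turbo-preview",
        "messages": [
            {"role": "user", "content": "Hello, world."}
        ],
    },
}

# Sending the request (requires the third-party `requests` package):
# import requests
# resp = requests.post("https://portal.myapp.com//stream/create",
#                      json=request_body, timeout=30)

print(json.dumps(request_body, indent=2))
```

With "stream": false, the POST response itself carries the result; with "stream": true, results are streamed to the listed channels instead.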
Note on OpenAI parameters: Parameters such as functions, max_tokens, seed, and temperature, previously documented for OpenAI, are not explicitly passed to the OpenAI SDK in the current service implementation. They may be subject to the OpenAI SDK's default behaviors or could be added in future updates.