BF Streaming API

The BetterForms Streaming API enables real-time streaming of responses from LLM services such as OpenAI and Google Gemini.

Overview

A call to this service posts the request to the LLM provider and streams the results back to the specified channel(s). This service is in beta release as of the time of this document.

Available in bf-staging only.

Endpoint: portal.myapp.com/stream/create

Method: POST

| Key | Type | Description |
| --- | --- | --- |
| apiKey | string | Unique API key for BF (found in app settings) |
| channels | array | Array of BF Messaging communication channels |
| actionName | string | Name of the action to be performed |
| payload | object | Object containing parameters for the action; these params are passed on to the streaming service |
| payload.provider | string | Name of the AI provider to be used (e.g., "openai", "gemini"). Defaults to "openai" if not specified. |
| payload.apiKey | string | Unique API key for the selected AI provider |
| payload.stream | bool | Whether streaming is enabled. If false, the result is returned directly in the POST response. |
| payload.messages | array | Array of messages to be processed. Structure may vary slightly by provider. |
| payload.model | string | Model to be used for processing (e.g., "gpt-4-turbo-preview" for OpenAI, "gemini-pro" for Gemini) |
| payload.generationConfig | object | (Optional, Gemini-specific) Object containing generation parameters for the Gemini provider |

Example request body:

{
  "apiKey": "BFAPI_xxxxxxxx-xxxxxx-xxxxxx",
  "channels": [
    "anonymous"
  ],
  "actionName": "assistantReceiveResultsStream",
  "payload": {
    "provider": "openai", // or "gemini"
    "apiKey": "sk-xxxxxxx-xxxxxxxx-xxxxxxx", // Your OpenAI or Gemini API Key
    "stream": true,
    "messages": [
      {
        "content": "You are a helpful assistant.",
        "role": "system" // For Gemini, 'system' role might need to be adapted or handled as part of 'history'
      },
      {
        "content": "Tell me a fun fact about space.",
        "role": "user"
      }
    ],
    "model": "gpt-4-turbo-preview", // or a Gemini model like "gemini-pro"
    // "generationConfig": { // Example for Gemini
    //   "temperature": 0.7,
    //   "maxOutputTokens": 2048
    // }
  }
}
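As a rough sketch, the request above can be assembled and sent with Python's standard library. The endpoint URL, API keys, and channel name below are placeholders, and `build_request` is a hypothetical helper for illustration, not part of BetterForms.

```python
import json
import urllib.request

# Placeholder endpoint; substitute your actual BetterForms host.
STREAM_URL = "https://portal.myapp.com/stream/create"

def build_request(provider, provider_key, bf_key, messages, model,
                  generation_config=None):
    """Assemble the POST request documented above (not sent here)."""
    payload = {
        "provider": provider,
        "apiKey": provider_key,
        "stream": True,
        "messages": messages,
        "model": model,
    }
    # generationConfig applies to the Gemini provider only.
    if provider == "gemini" and generation_config:
        payload["generationConfig"] = generation_config
    body = {
        "apiKey": bf_key,
        "channels": ["anonymous"],
        "actionName": "assistantReceiveResultsStream",
        "payload": payload,
    }
    return urllib.request.Request(
        STREAM_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request(
    provider="openai",
    provider_key="sk-...",       # your OpenAI or Gemini key
    bf_key="BFAPI_...",          # your BetterForms key
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Tell me a fun fact about space."},
    ],
    model="gpt-4-turbo-preview",
)
# urllib.request.urlopen(req) would send it. With "stream": false the
# result would come back directly in this response rather than being
# streamed to the BF Messaging channels.
```

With streaming enabled, the POST response itself is not where results arrive; listen on the channels named in `channels` for the action given in `actionName`.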
