BF Streaming Proxy
The streaming proxy allows you to stream generative AI output directly into a BF app via messaging.
Workflow
1. The back end (FMS) generates the prompt and data needed for the LLM API, plus the target channel for the results.
2. The Streaming Proxy forwards the request to the LLM (see the relay sketch after this list).
3. The LLM streams results back to the proxy.
4. The Streaming Proxy sends messages to the target channel and calls a named action to handle the results.
5. The browser receives the messages and processes the results.
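As a rough illustration of steps 2-4, the sketch below forwards a request to an OpenAI-compatible streaming endpoint and relays each chunk to the target channel. The request shape, the `sendToChannel` helper, and the endpoint URL are assumptions for illustration, not the actual proxy API.

```typescript
// Hypothetical relay loop -- names and endpoint are illustrative assumptions.
interface ProxyRequest {
  prompt: string;     // prompt built by the back end (FMS)
  channelId: string;  // target channel for the streamed results
  actionName: string; // named action the front end runs per message
}

// Assumed messaging helper; stands in for the BF messaging API.
declare function sendToChannel(
  channelId: string,
  actionName: string,
  payload: { text: string; done: boolean }
): Promise<void>;

async function relayToChannel(req: ProxyRequest): Promise<void> {
  // Forward the request to the LLM with streaming enabled (step 2).
  const res = await fetch("https://api.example.com/v1/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt: req.prompt, stream: true }),
  });
  if (!res.ok || !res.body) {
    throw new Error(`LLM request failed: ${res.status}`);
  }

  // Read the streamed body chunk by chunk (step 3) and relay each
  // chunk as a message to the target channel (step 4).
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    const text = decoder.decode(value, { stream: true });
    await sendToChannel(req.channelId, req.actionName, { text, done: false });
  }

  // Signal completion so the named action can finalize the UI.
  await sendToChannel(req.channelId, req.actionName, { text: "", done: true });
}
```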
Notes:
Currently there is no rate throttling on the Proxy, as LLM generation is generally much slower than messaging rate limits. However, there can be a bottleneck when updating the front-end UI if each message requires a lot of post-processing.
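One way to ease that bottleneck is to buffer incoming chunks in the browser and flush them to the UI once per animation frame rather than once per message. This is a minimal sketch, assuming the named action invokes a handler like the hypothetical `onChunk` below; the output element is also an assumption, not part of the BF API.

```typescript
// Minimal browser-side buffering sketch; assumes the named action
// calls onChunk(text) for every message the proxy sends.
let buffer = "";
let flushScheduled = false;

const output = document.getElementById("llm-output")!; // assumed element

function onChunk(text: string): void {
  buffer += text;
  // Coalesce many small messages into one DOM update per frame,
  // so post-processing cost does not scale with message count.
  if (!flushScheduled) {
    flushScheduled = true;
    requestAnimationFrame(() => {
      output.textContent += buffer;
      buffer = "";
      flushScheduled = false;
    });
  }
}
```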