Hey all, I have a question. I currently have an AI assistant with the following setup: VoiceFlow frontend -> Replit backend -> OpenAI. The flow is: when a user asks the assistant a question, it is sent via a POST request to my backend, which then fetches a response from OpenAI. My issue with VoiceFlow is that when a user types a question and OpenAI is busy and takes longer than 60 seconds to respond, I get an "Unable to fetch response" in VoiceFlow. The thread and the chat are still active, but the user never sees a reply (OpenAI does return a successful response and my backend keeps working; VoiceFlow just doesn't display it). If the user asks another question, the chat continues as usual.
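For context, my Replit endpoint is roughly the shape sketched below. This is a simplified sketch with placeholder names (the `/ask` route, the model), not my exact code; the point is that the handler awaits OpenAI before responding, so anything that takes longer than VoiceFlow's 60-second window never gets displayed, even though the request itself eventually succeeds:

```ts
// Simplified sketch of the Replit backend (Express + the OpenAI Node SDK).
// Route name and model are placeholders.
import express from "express";
import OpenAI from "openai";

const app = express();
app.use(express.json());
const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

app.post("/ask", async (req, res) => {
  // The handler blocks here until OpenAI answers. If that takes more than
  // ~60 seconds, VoiceFlow has already given up on the HTTP call, even though
  // this response eventually completes successfully on the backend.
  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: req.body.question }],
  });
  res.json({ answer: completion.choices[0].message.content });
});

app.listen(3000);
```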
So I am looking to replace VoiceFlow with Botpress. I have been watching tutorials and going through everything, and I find Botpress a lot cooler and better so far, BUT I can't find any information on whether my chat will have the same behavior while waiting for a response from the backend. So my question before I go all in on Botpress is: if I make an API call that takes longer than 1-2 minutes to respond, does the chat end? Does it return an error? Does it freeze, or simply not return anything and wait for the next user input?
Can I set up the following in Botpress:
A user types a question that requires OpenAI to run through a very big database (say the process takes 85 seconds). Every 45 seconds Botpress sends the user a message saying "Sorry for the delay, give me a moment", while it keeps waiting for the response from my backend on Replit. At no point does the chat end, freeze, or fail to send a response to the user. When OpenAI is ready, the answer comes back to Botpress through my Replit server.
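To make that concrete, the pattern I have in mind looks roughly like the sketch below. This is only an illustration of the timing I want, not real Botpress API: `sendBotMessage` is a hypothetical stand-in for whatever Botpress uses to push a text message, and I'm assuming the Replit backend is changed to return a job id immediately (POST /jobs) and expose a status endpoint (GET /jobs/:id) instead of holding the HTTP request open:

```ts
// Hypothetical sketch of the desired keep-alive flow; names and URLs are placeholders.
const BACKEND = "https://my-replit-backend.example.com";

async function answerWithKeepAlive(
  question: string,
  sendBotMessage: (text: string) => Promise<void>, // stand-in for Botpress "send text"
): Promise<void> {
  // Kick off the long-running OpenAI job; the backend replies immediately with an id.
  const startRes = await fetch(`${BACKEND}/jobs`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ question }),
  });
  const { jobId } = await startRes.json();

  const pollEveryMs = 5_000;       // check the backend every 5 seconds
  const keepAliveEveryMs = 45_000; // reassure the user every 45 seconds
  const maxWaitMs = 300_000;       // give up after 5 minutes
  let waited = 0;
  let sinceKeepAlive = 0;

  while (waited < maxWaitMs) {
    const pollRes = await fetch(`${BACKEND}/jobs/${jobId}`);
    const { done, answer } = await pollRes.json();
    if (done) {
      await sendBotMessage(answer); // the real reply, e.g. ~85 seconds in
      return;
    }
    await new Promise((resolve) => setTimeout(resolve, pollEveryMs));
    waited += pollEveryMs;
    sinceKeepAlive += pollEveryMs;
    if (sinceKeepAlive >= keepAliveEveryMs) {
      await sendBotMessage("Sorry for the delay, give me a moment");
      sinceKeepAlive = 0;
    }
  }
  await sendBotMessage("Sorry, this is taking longer than expected.");
}
```

The key question is whether Botpress lets a code step (or whatever the right mechanism is) keep running and keep sending messages like this for a couple of minutes without the conversation timing out.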