:tools: Timeout Issues with AI Generate Text in Bo...
# 🤝help
For several months now, I have been experiencing significant issues with the response times of AI Generate Text in Botpress Cloud, despite using detailed, well-structured prompts. Initially, with GPT-3, responses were quick but not of the desired quality. Moving to GPT-4 Turbo, and more recently GPT-4o, performance improved dramatically in the emulator, but not in production, due to timeouts. Despite increasing the timeout from 30s to 60s, more complex requests often fail to complete, producing error messages such as: "60 seconds run out of time while processing card:ai-generate/node:XXXX/flow:YYYYY."

The switch to GPT-4o brought clear speed improvements in the emulator, cutting processing time by about 30%. However, these gains do not carry over to production, where requests regularly stall with the message: "Card took too long to execute."

I have tried various workarounds, including segmenting prompts and using the "Query Knowledge Bases" card to lighten the processing load, but the improvements have only been partial.

Question: Are there any configurations or techniques to overcome these timeout limits imposed by Botpress? Or should I consider integrating other models via external APIs (e.g. AI Stack), or moving my implementation to another platform that can better handle such processing loads? I would appreciate any insights or experiences you can share!
Second question: if I activate "Always Alive" mode (currently not active), the add-on described as "Adds Always Alive functionality for 1 additional bot", will that solve my user session timeout problems?