Solutions from Documents
# 📖tutorials
q
Solutions from Documents Chatbot workflow:

1. Create a solution from the Botpress knowledge base using GPT-4.
2. Share the solution and the top-5 related Botpress KB chunks with other LLMs.
3. Ask the LLMs to improve the solution using the provided chunks and the user's question.
4. Rate the different solutions (1-5 stars).
5. Display all solutions (the original and 4 improved solutions) and their ratings to the chatbot user.

Show the project to the client and repeat steps 1-5 until the deal is secured.

I'll add all the code, prompts, etc., and upload the bot file after I have tested it enough. I'm building it so that when other bot builders want to test or start using it, they just replace the API keys and they're good to go.

Solutions from the knowledge base are different from answers from the knowledge base: the LLMs need to create solutions based on the official documents and code examples, even when the exact task isn't in the documents. You can change the number of chunks (5 in this example) sent to the other LLMs so that they have enough related context, in this code:
```js
workflow.chunks = event.kb.results
  .slice(0, 5)
  .map((a) => a.dsFriendlyName + "\n" + a.content)
  .join("\n\n")
```
Don't send all 50 chunks from the Botpress KB; it costs too much and may not even fit in the LLMs' context window. Originally, I built this using the Cohere API and the Pinecone vector database, but here we are all about doing things in Botpress.

I want to use this to build a chatbot that lets everyone build blockchain smart contracts like a pro, where the KB is the official documents and the best GitHub code examples, and I'm getting feedback from blockchain developers whose job is to build smart contracts. And a friend of mine, who is a lawyer, has been searching for more than a year for a chatbot or other AI tool that can give correct answers when the KB is his country's law for the specific field he's working in. If a multi-LLM chatbot can solve those cases as well, we all have clients.
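The "improve the solution" step (steps 2-3 above) boils down to assembling a prompt from the user's question, the first solution, and the sliced chunks. A minimal sketch of that assembly, where `topChunks` and `buildImprovePrompt` are illustrative names (not Botpress APIs) and the KB results are dummies:

```js
// Hypothetical sketch: build the prompt sent to each secondary LLM.
// Function names and the sample KB results are illustrative only.
function topChunks(kbResults, n = 5) {
  // Same slicing as the workflow snippet above: keep only the n best chunks.
  return kbResults
    .slice(0, n)
    .map((r) => r.dsFriendlyName + "\n" + r.content)
    .join("\n\n");
}

function buildImprovePrompt(question, originalSolution, chunks) {
  return [
    "User question:\n" + question,
    "Current solution:\n" + originalSolution,
    "Related KB chunks:\n" + chunks,
    "Improve the solution using only the chunks above.",
  ].join("\n\n");
}

// Example with dummy KB results:
const kbResults = [
  { dsFriendlyName: "docs/intro", content: "Intro text" },
  { dsFriendlyName: "docs/api", content: "API text" },
];
const prompt = buildImprovePrompt(
  "How do I call the API?",
  "Use the API endpoint.",
  topChunks(kbResults, 5)
);
```

The resulting `prompt` string would then be sent to each of the other LLMs, and their answers collected for the rating step.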
c
Very interested in learning more. The way I understand it is that you want the initial KB retrieval to be from Botpress, after which we'd use complementary LLMs to fact-check and/or clean up the data. Am I understanding this correctly? If so, the initial retrieval from the KB is crucial, and from my personal testing the quality of the KB retrieval depends on the limitations of the model and the quality of the query/prompt.
q
Exactly, you described every step and its challenges even better ⚡
c
Let me talk about RAG more in this thread, since it can bypass some of the issues mentioned by improving the initial data retrieval.

**Question:** If presented with the same question and data, would the answer from a non-RAG AI be similar to that from one using RAG?

**Answer:** Whether an AI gives the same answer through a custom Knowledge Base (KB) and a Retrieval-Augmented Generation (RAG) system can vary due to differences in their processing mechanisms and utilization of information. Here are some key factors:

### 1. Processing Mechanism

- **Custom KB**: Uses a deterministic approach to retrieve directly related information based on fixed rules. Responses are consistent if the query matches the data in the KB.
- **RAG System**: Fetches relevant documents from a larger corpus, introducing variability in retrieval and synthesis:
  - **Variability in Retrieval**: Different documents may be retrieved for similar queries depending on the retrieval algorithm.
  - **Generative Variability**: Responses vary based on how the generative model interprets and integrates the retrieved documents.

### 2. Consistency and Variation

- **Custom KB**: High consistency in responses when the query matches predefined data, but less flexible.
- **RAG System**: Might produce varied responses over time, adapting to new information and the context of retrieved documents.

### 3. Example Scenario

- **Using a KB**: Provides consistent answers if the KB includes the specific information asked for in the query.
- **Using RAG**: Searches its corpus for the most relevant documents each time, possibly leading to varied responses depending on what's most relevant at that moment.

### Conclusion

The consistency of the response depends on how each system is implemented and maintained. A RAG system offers dynamic, context-rich responses, while a custom KB provides predictability and reliability. Each system has strengths suited to different application needs.
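The "variability in retrieval" point above can be made concrete with a toy retriever. This keyword-overlap scorer is only a stand-in for real embedding-based vector search, and all names and documents here are made up for illustration:

```js
// Toy retriever: ranks documents by word overlap with the query.
// A stand-in for real vector search; shows how different queries
// pull different documents to the top of the ranking.
function retrieve(query, docs, k = 2) {
  const qWords = new Set(query.toLowerCase().split(/\W+/).filter(Boolean));
  return docs
    .map((doc) => ({
      doc,
      score: doc
        .toLowerCase()
        .split(/\W+/)
        .filter((w) => qWords.has(w)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((d) => d.doc);
}

const corpus = [
  "Smart contracts run on the blockchain.",
  "Contract law varies by country.",
  "Blockchain nodes validate transactions.",
];

// Two questions on related topics surface different documents:
const hits1 = retrieve("how do smart contracts work on a blockchain", corpus);
const hits2 = retrieve("what law governs a contract", corpus);
```

Because the top-ranked documents differ per query, the generative model sees different context each time, which is exactly where RAG's answer variability comes from.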
b
Hello, you seem to be a Botpress artist 🙂 I tried to generate a chatbot purely in JSON, generated by Gemini 1.5. The code looks very promising, but I could not yet successfully import it into Botpress. Do you think this is possible? Is there a sample .json somewhere that uses a lot (all?) of Botpress's functionality, so that I can give it to Gemini 1.5 as an example? (I am not using GPT-4o because its context window seems too small for this.) Many thanks! Kind regards, Chris