A chatbot to predict an outcome.
# 🤝help
m
Hi, I'd like to create a chatbot that uses its KB to analyze the user's answers to predict an outcome. For example: "Would you like to know if your product is eligible for our government financial support? If yes, I need you to answer the following questions about your product." So, the chatbot asks a series of questions to the user. I suppose this step involves creating a node that contains several "Capture Information" cards. I was mostly incorporating the "Raw Input" card.
And then, here's where I struggle. I do not know how to make the chatbot predict an outcome (i.e., yes, eligible or no, not eligible) by analyzing the user's answers using the KB. I also would like the chatbot to explain the reasons for the outcome, e.g., "Yes, it is eligible because ... ."
I've tried different cards, but the chatbot's answers were not what I expected. Its answers were either summarizing the user's answers, showing all information from the KB without any relevance to the user's answers, or just completely wrong although the user's answers clearly indicated that the product is not eligible. Thank you. Best regards, Silvan
f
Hey there Silvan 👋 sounds like a fun project. You have several options here.
- Simple option: Use an AI Task. Input the criteria for product eligibility into the AI Task and let it handle the determination process. This method is straightforward but depends heavily on the AI's built-in capabilities.
- Intermediate option: Fine-tune a pre-existing model. By fine-tuning a model specifically with the information about your products and their eligibility criteria, you're likely to achieve more accurate results compared to a generic AI Task. This approach requires some additional time and expertise to implement but can be more reliable.
- Advanced option: Develop a custom machine learning model. This is the most involved method, requiring significant time and technical knowledge. Creating your own model allows for maximum customization and potentially the highest accuracy, as it can be tailored specifically to your requirements and data.
These are just starting ideas. I am not a legal expert, but keep in mind that what your chatbot says might be binding. So if it says that a product is eligible, then it might be difficult to take back.
q
Yes, I was just going to say the same! I love these! 🔥 🛠️ 💎
@modern-caravan-98356 I have many projects here where I try to figure out something similar #1132038253109837994 Solving this perfectly would also solve most of my problems, at least all my money problems. So I'm going to try this as well 💡
I created an example knowledge base text, detailed rules and conditions related to product eligibility for government financial support
then I created an Execute code card and stored the answers to 'usersQuery' variable
```js
workflow.usersQuery = {
  product_type: workflow.productType,
  manufactured_locally: workflow.manufacturedLocally,
  price_range: workflow.priceRange,
  primary_market: workflow.primaryMarket
}
```
⬆️ I've found that this JSON-style format works best for those queries; the AI understands it well
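To make the connection concrete, here's a standalone sketch (plain Node, not the Botpress runtime) of how that Execute Code card's object could be serialized into the compact JSON string that later gets sent as the KB query. The `workflow` object here is a stand-in with example values, not the real Botpress variable:

```javascript
// Stand-in for the Botpress workflow variables (hypothetical example values)
const workflow = {
  productType: 'electronic device',
  manufacturedLocally: 'yes',
  priceRange: 'above $500',
  primaryMarket: 'consumer electronics',
};

// Same shape as the Execute Code card above
workflow.usersQuery = {
  product_type: workflow.productType,
  manufactured_locally: workflow.manufacturedLocally,
  price_range: workflow.priceRange,
  primary_market: workflow.primaryMarket,
};

// JSON.stringify produces the compact one-line string the query ends up as
const queryString = JSON.stringify(workflow.usersQuery);
```

Inserting `{{workflow.usersQuery}}` into a card does essentially this serialization for you; the sketch just shows what the model actually receives.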
check here for prompting strategies for better results when using AI Generate Text (and why I used that exact format) https://discord.com/channels/1108396290624213082/1231503419278360576
Some development ideas for that use case:
- Optional (but recommended): before querying the knowledge base, use a chatflow to first allow users to confirm or correct the bot's understanding of their information.
- Optional: at the end, ask follow-up questions for any incomplete or unclear information.
- Test your chatbot across different scenarios to verify accurate predictions.
- Improve the knowledge base and decision-making logic (prompting, in this case) based on test results.
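The confirmation step recommended above can be sketched outside Botpress too. This is only an illustration of the idea (echoing captured answers back so the user can correct them before the KB query); the `answers` object and message format are made up:

```javascript
// Hypothetical captured answers (in Botpress these would come from Capture Information cards)
const answers = {
  product_type: 'electronic device',
  manufactured_locally: 'yes',
};

// Build a human-readable summary for the user to confirm or correct
const confirmation =
  'Here is what I understood:\n' +
  Object.entries(answers)
    .map(([key, value]) => `- ${key.replace(/_/g, ' ')}: ${value}`)
    .join('\n');
```

A yes/no capture after showing `confirmation` would then either proceed to the KB query or loop back to re-ask the corrected field.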
@modern-caravan-98356 Keep us updated on your progress and results 🚀
I have four more ideas / comments for this, I’ll write those here tomorrow 🧑🏽‍💻
also some changes to the bot file, but it works well already
m
Thank you for your explanation. Wow, I have a lot to learn. This is my first BotPress project, by the way. " These are just starting ideas. I am not a legal expert, but keep in mind that what your chatbot says might be binding. So, if it says that a product is eligible then it might be difficult to take back." --> Yeah, actually, this is the only thing that my chatbot never fails to do. After stating the prediction outcome, my chatbot never failed to remind the user to directly consult with relevant authorities for more accurate information. That's why I'm kind of confident that I've done something right somewhere in my chatbot construction.
Thank you so much for your detailed guidance. I need time to digest this and retry. I did something similar to "Add_Stuff_Here" node after watching Robert's "EP 10: Building a Bot from Start-to-Finish" YouTube. I also used "Query Knowledge Bases" and "Execute code" cards. But my prompt in the "Execute code" card was probably the issue. By the way, I also watched a YouTube video by Joren Wouters, "How To Create an AI Chatbot For Free in 2024 (+ Template)." His "Query Knowledge Bases" card uses "{{event.payload.text}}." I followed this, but I don't understand what it means. How does one decide what to type in "Query"? I will look at your guidance more carefully and try to replicate what you did. I look forward to your ideas and comments.
q
Building guides like this, or trying to solve some hard problems based on the documents: without putting 'without explicitly referencing the Knowledge Base' in the prompt, the AI often answers with "The Knowledge Base states," "from the Knowledge Base," or "based on the information I retrieved from the documents provided." So if you need it to just give the correct answer, without referencing where it got the answer from, give the AI those instructions in your prompt as well.
I used this in the prompt as well: 'Focus on directly integrating the criteria into the explanation as if they are universally understood standards.' Without that, the AI always ended by recommending that the user also contact some other authorities; sometimes the recommendation was good, sometimes not so good. I don't need that in my projects: I want the AI to treat the info and documents I gave it as universally understood standards, with no other advice needed. If I were to build something involving legal matters, I would add to the end of the prompt something like 'Always end the response by mentioning exactly this: [YOUR RECOMMENDATION]', so it always gives the same recommendation after the answer instead of randomly creating one.
in the query, I used only this JSON (stored in variable called usersQuery):
```json
{"product_type":"electronic device","manufactured_locally":"yes","price_range":"above $500","primary_market":"consumer electronics"}
```
since I have found that the AI understands that format really well, even better than my text prompts, and it works well in my projects. In your project, you might want to add some text before the query so it works even better, like 'Verify this product's eligibility for government financial support {{workflow.usersQuery}}'. https://cdn.discordapp.com/attachments/1232698972510879744/1232930807086317568/image.png?ex=662b3fcb&is=6629ee4b&hm=56af7fe815814e217b0b23e1911cff01e95a76bda529a873632f9fa813b6aa57&
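If you'd rather assemble that instruction-plus-JSON query in an Execute Code card instead of inline in the query field, a minimal sketch looks like this (the `kbQuery` variable name is hypothetical; only the instruction text and JSON shape come from the thread):

```javascript
// Captured answers (example values)
const usersQuery = {
  product_type: 'electronic device',
  manufactured_locally: 'yes',
  price_range: 'above $500',
  primary_market: 'consumer electronics',
};

// Prefix an instruction before the JSON so the KB search has more context
const kbQuery =
  "Verify this product's eligibility for government financial support " +
  JSON.stringify(usersQuery);
```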
Key characteristics of the JSON format (which is easy for humans to read and write and also equally easy for machines to parse and generate):
- data is in name/value pairs
- data is separated by commas
- curly braces hold objects
- square brackets hold arrays
https://cdn.discordapp.com/attachments/1232698972510879744/1232931708215496744/Screenshot_from_2024-04-25_08-49-31.png?ex=662b40a2&is=6629ef22&hm=0e9ce1d9752895ac41017fc17abddb1b0e164ecd9dd13a0c484fccc4c3d29a79&
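Those characteristics in one tiny round-trip demo (the example object is made up for illustration):

```javascript
const example = {
  product_type: 'electronic device',        // name/value pairs, separated by commas
  eligible_markets: ['consumer', 'retail'], // square brackets hold arrays
};                                          // curly braces hold the object

const text = JSON.stringify(example); // easy for machines to generate
const back = JSON.parse(text);        // and equally easy to parse back
```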
If it should meet ALL the criteria to be 'eligible', and otherwise answer 'not eligible' with an explanation, I changed the prompt to this:

Using the provided essential details and the user's query, determine if the product meets ALL the eligibility criteria for government financial support. If ALL criteria outlined in the knowledge base are satisfied, provide a concise verdict of 'eligible.' If not all criteria are met, the verdict should be 'not eligible.' Follow with a brief summary that explains the decision, specifying which criteria were or were not met, without explicitly referencing the Knowledge Base. Treat the criteria as universally understood standards.
# RULES AND CONDITIONS
@workflow.queryResult
# QUERY
Verify this product's eligibility for government financial support
# ESSENTIAL DETAILS
@workflow.usersQuery

https://cdn.discordapp.com/attachments/1232698972510879744/1232936969244246026/image.png?ex=662b4588&is=6629f408&hm=9a89b1bc30d34a17a5b98b29913d0869cbc13b1829e88a415b23ef4d678dba99&
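The ALL-criteria logic that prompt asks the AI to apply is the same as an `Array.every` check. Here's a deterministic sketch of the idea; the three criteria below are invented for illustration (the real rules live in your knowledge base), so this is a mental model, not a replacement for the prompt:

```javascript
// Hypothetical eligibility rules, invented for this example
const criteria = [
  (a) => a.manufactured_locally === 'yes',
  (a) => a.price_range === 'above $500',
  (a) => a.primary_market === 'consumer electronics',
];

// 'eligible' only if EVERY criterion passes; otherwise 'not eligible'
function verdict(answers) {
  return criteria.every((check) => check(answers)) ? 'eligible' : 'not eligible';
}
```

If your criteria were fixed and small, you could even skip the AI verdict entirely and only use the AI to phrase the explanation.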
m
Thank you for the detailed explanation. I'll try again.
q
m
Thank you
I have tried the above. The chatbot works better than before, and the workflow is also more organized. Thank you so much! However, the answer it produces is still incorrect. It keeps saying "Eligible" although the user's answers should obviously lead to "Not Eligible." Also, the reasoning for the answer is only a summary of the user's answers, rephrased in a way that suggests the product is eligible. The chatbot doesn't seem to refer to the KB at all. My suspicion is that this is because I use many non-English source documents. Is Botpress able to automatically understand non-English source documents? Should I translate these documents or use English-only documents?
q
Maybe the language might be the issue then 💡 Did you import and test the bot file I sent? When trying that chatbot, it always gives the correct answers, Eligible or Not Eligible, based on only the criteria in the documents, always takes answers from the knowledge base only (as it has been instructed) and never gives summaries.
I'm building these for three different fields at the moment, and all these use cases from other fields (like this one) are an excellent way to find out all the issues (which I might not have found in my projects yet).
And by "I have tested," I mean that only a few times, max 10 times, so obviously I haven't done nearly enough testing yet to find all possible issues 🛠️ 💎
I'm trying again, with these questions and answers (it should pass):
Product type? Agricultural Equipment
Manufactured locally? yes
Price range? above $1000
Primary market? agricultural sector
Now I'm changing some of the answers (it should fail):
Product type? Agricultural Equipment
Manufactured locally? no
Price range? below $500
Primary market? agricultural sector
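Those two manual runs can be written down as checks. The rule below is only inferred from the "should pass"/"should fail" expectations above, not taken from the actual knowledge base, so treat it as a hypothetical encoding of the test cases:

```javascript
// Inferred (not confirmed) eligibility rule for the agricultural example
const isEligible = (a) =>
  a.manufactured_locally === 'yes' &&
  a.price_range === 'above $1000' &&
  a.primary_market === 'agricultural sector';

// First run: all answers match, so it should pass
const shouldPass = {
  product_type: 'Agricultural Equipment',
  manufactured_locally: 'yes',
  price_range: 'above $1000',
  primary_market: 'agricultural sector',
};

// Second run: two answers changed, so it should fail
const shouldFail = {
  ...shouldPass,
  manufactured_locally: 'no',
  price_range: 'below $500',
};
```

Keeping a small table of cases like this makes it easy to re-run the same scenarios after every prompt or KB change.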
m
Sorry, I should have asked earlier. I haven't opened your files yet. How to open those files?
I was able to open one of them with Notepad.
q
Sorry that I didn't give clear enough instructions 🙏 The best way to try the chatbot file I uploaded here is, after you've downloaded the file, create a new Botpress chatbot, and in the top left corner you can select 'Import from file' and start testing it 🛠️ https://cdn.discordapp.com/attachments/1232698972510879744/1233234710562340894/image.png?ex=662c5ad3&is=662b0953&hm=c05fc4d3c05c78f51545d05cd9c7bf0cff1fefff3bc4665f8b3dd5ceef0668a0&
m
Ah, ok, got it. I'll try that. Thank you
q
I'll help you build and test as many versions as it takes until it works 🛠️ Since I need something similar for my own projects, every test and problem we solve will also help my projects 💎
m
Thank you. I'm trying it now. I'm comparing my workflow with yours.
Amazing!!!! This time, my chatbot can properly think. I've only tried once, but the answer shows that it really went through the KB to provide an argument. It wasn't just simply rephrasing the user's answers. I'm going to try with different product profiles just to be sure. I think my mistake was in "Query Knowledge Bases": I only provided the variable without the instruction (i.e., just {{workflow.usersQuery}}).
Thank you!
By the way, I forgot to mention, this is probably a glitch, just before the chatbot showed its answer, it repeated the user's answer to the last question.
q
Thanks for letting me know! ⚡ From the pictures I shared, the chatbot always gave the final answer after the user's last message and didn't repeat it. But I'll check to see if I notice that issue too
m
After a couple of tries, I found that the chatbot randomly repeated the answers. Sometimes the first answer, sometimes the second. Although strange, I think it is not a bad thing because it kind of confirms the user's answer to the question. I don't know, maybe this is a built-in "behavior"?
By the way, the card after "Query Knowledge Bases," why did you use "AI Generate Text" instead of "AI Task"? I was reading the information about both cards. To my understanding, "AI Generate Text" is only for generating, well, text. But the prompt in the AI Generate Text looks more like AI Task, "Using the provided essential details and the user's query, determine if the product meets ALL the eligibility criteria for government financial support. ... ..." If AI Generate Text can also do this, how is it different from AI Task?