Knowledge base answers inconsistent
# 🤝help
b
Hi! I'm using the knowledge base on a certain node, but when I test that specific node I can tell the answers are inconsistent for the same questions. How can I solve this? Is it possible to make the answers a bit more controlled, like with the AI task? For example by setting the temperature to 0, or by giving instructions as you can with the ChatGPT function?
a
That's a great question! It's due to how LLMs work, and it's central to how Botpress does AI. LLMs have a bit of randomness by default (even with a temperature of zero), just like a human would. We don't cache / save the results because that would remove a feature of LLMs, but maybe we could add it. A cached KB that always returns the same results given the same inputs would be a great candidate for #1111026806254993459 !
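To make the idea concrete, here is a rough sketch of what such a cache could look like in an Execute Code card. This is not an existing Botpress feature; `queryKnowledgeBase` is a hypothetical stand-in for however the KB is actually queried:

```typescript
// Minimal sketch of a cached KB wrapper (assumption: not a real Botpress API).
// The same normalized question always returns the same stored answer.

const answerCache = new Map<string, string>();

// Normalize so trivial differences (case, extra spaces) hit the same cache entry.
function normalize(question: string): string {
  return question.trim().toLowerCase().replace(/\s+/g, " ");
}

async function cachedKbAnswer(
  question: string,
  queryKnowledgeBase: (q: string) => Promise<string> // hypothetical KB call
): Promise<string> {
  const key = normalize(question);
  const cached = answerCache.get(key);
  if (cached !== undefined) {
    return cached; // cache hit: identical input, identical output
  }
  const answer = await queryKnowledgeBase(question);
  answerCache.set(key, answer);
  return answer;
}
```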
b
Thanks Patrick for your help. Do you maybe know how I can solve my problem then? I have a list of 60 names, each with a corresponding description and link. I put this list in a document and uploaded it as a knowledge base. When a chatbot user asks about a name, I want the knowledge base to respond with exactly the name, the corresponding description, and the corresponding link. At the moment it sometimes sends only the name and description, and sometimes it adds the corresponding link, depending on how the chatbot user phrases the question. I also tried putting the list of 60 items in the ChatGPT instructions together with an exact description of how to respond, but there's a maximum of 5000 characters and the list of 60 items doesn't fit. What do you think is a good solution?
a
@bitter-oil-35012 We just released a feature called Tables that does exactly this! Let us know what you think 🙂
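And if Tables doesn't fit your setup, a plain lookup in an Execute Code card is another way to guarantee the same output for the same name, since it bypasses the LLM entirely. A sketch with made-up example entries (the real list would hold your 60 items):

```typescript
// Sketch of a deterministic lookup for the 60-item list (entries below are invented examples).
// Because no LLM is involved, the same name always produces the same reply.

interface Entry {
  description: string;
  link: string;
}

const entries: Record<string, Entry> = {
  alice: { description: "Example description for Alice.", link: "https://example.com/alice" },
  bob: { description: "Example description for Bob.", link: "https://example.com/bob" },
  // ...remaining items
};

function lookup(name: string): string {
  const entry = entries[name.trim().toLowerCase()];
  if (!entry) {
    return `Sorry, I couldn't find "${name}".`;
  }
  // Always return the name, description, and link in the same fixed format.
  return `${name}: ${entry.description}\n${entry.link}`;
}
```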
c
@acceptable-gold-88171 I'd like to follow up on this. I have been getting wildly inconsistent responses from knowledge base querying. Within the same conversation I got everything from the correct answer, to responses saying the answer was in the KB without actually providing it, all the way to "could not provide an answer from the context". Why is this happening, and how can I prevent it from happening again? I was under the impression that an LLM at temperature 0 gives consistent output provided the input is consistent. If there is variability in the input prompt or in the similarity search results from the KB, that could cause variability in the output, but given a fixed input and temperature 0, I believe the output should be deterministic. Is that accurate?
f
Did you solve this?
c
No, it was inconsistent. Hard to reproduce. Still waiting on a response from the Botpress team
a
A temperature of 0 will make it more deterministic, but it's not perfect unfortunately.
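If it helps to quantify it, a small test harness like the one below can show how many distinct answers the same question actually produces. `askKb` here is just a placeholder for however you call the bot or KB (HTTP request, SDK, etc.), not a real API:

```typescript
// Quick harness to measure how consistent the answers are for a fixed question.
// `askKb` is a hypothetical placeholder for the real query mechanism.

async function measureConsistency(
  askKb: (question: string) => Promise<string>,
  question: string,
  runs = 10
): Promise<void> {
  const answers = new Set<string>();
  for (let i = 0; i < runs; i++) {
    answers.add((await askKb(question)).trim());
  }
  console.log(`${runs} runs produced ${answers.size} distinct answer(s).`);
  // A fully deterministic pipeline would produce exactly 1 distinct answer;
  // retrieval variability or residual sampling randomness shows up as more.
}
```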
f
I've found this to help:
- Segment your KBs into as many specific categories as possible (so that you can write highly specific descriptions for each).
- Improve the quality of your KB descriptions.
- Break your AI task into multiple AI tasks, or replace some of the things you're asking it to do with simpler tools like expressions/intents, then use the AI task for the remainder.

If you ask the AI task to do too much, it doesn't work properly.
c
Thanks Matthew! All your recommendations make sense as best practices to follow. I was just confused about why I was getting different responses from the KB with the temperature set to 0, in the same conversation, just by asking the same question over and over. Each time I asked I got a different response, ranging from the correct answer to "the answer is in a PDF document" all the way to no answer from the KB.
f
Yes, I had this issue as well. I found that my AI task instructions were too big for it to handle. It would give me incorrect answers, no answers, or incomplete answers. My fix was doing the above. 🙂
c
Good to know, thank you!