Hallucinations KB
# 🤝help
p
Hi everyone! 🙂 My Report ID is: e0e9b618-ae80-4ed0-b4f3-be8c24300d3d

I'm working on a chatbot for a motorcycle dealer. The knowledge base contains information about various motorcycles, all with the same details. I've run into situations like these:

- When I ask, "I have an A2 license, what motorcycle can I buy?" I get no response, even though this information is in the Knowledge. However, if I ask, for example, "I have an A3 license, what motorcycles can I ride?" I do get a response.
- I ask if they sell electric motorcycles (information NOT present in the Knowledge), and it responds "yes."
- I ask if they have pink motorcycles (a color not listed among the available options), and it responds "yes."

The concerning issue isn't just that it replies "I don't know the answer" when it actually should know; it's that it makes up answers to questions it doesn't have the information for. This is highly problematic for the company, as it could make claims about things that don't exist!

I then tried restructuring the KB into different text sections. Now it no longer hallucinates, but it often responds with "I don't know the answer" even though the answer is clearly written in the KB. It's also unable to provide complete information. For instance, if I ask, "What white motorcycles do you have available?" it responds with only one model instead of all the ones that are actually available.

Do you have any other ideas? I've been struggling with the KB for over a month now, and I'm not sure how to fix it. Thank you very much! 🙏

https://cdn.discordapp.com/attachments/1146454178210791444/1146454178433085530/Knowledge_moto_2.png

https://cdn.discordapp.com/attachments/1146454178210791444/1146456953455923331/Knowledge_moto_4.png

https://cdn.discordapp.com/attachments/1146454178210791444/1146456953816612906/Knowledge_moto_3.png

b
So I've had this issue. The best way to deal with it is with what's known as "chain-of-thought prompting": basically, instead of having the model process the entire thing at once, you feed it the reasoning little by little. This paper explains it better: https://arxiv.org/pdf/2201.11903.pdf but feel free to DM me if you want to ask anything specific about it 🙂 I hope I was able to help you 😁
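In case a concrete sketch helps: here's roughly what a chain-of-thought-style prompt can look like. The models, power limits, and KB snippet below are made up for illustration; this only builds the prompt string, assuming you pass it to whatever model client you're using.

```python
# Hypothetical KB snippet and question (not the real dealership data).
KB_SNIPPET = (
    "A2 license: riders may use motorcycles up to 35 kW.\n"
    "Model X1: 30 kW, available in white or black.\n"
    "Model X2: 70 kW, available in red only.\n"
)

QUESTION = "I have an A2 license, what motorcycle can I buy?"

def build_cot_prompt(kb: str, question: str) -> str:
    """Ask the model to reason step by step over the KB before answering."""
    return (
        "Use ONLY the knowledge below. If the answer is not there, say "
        "'I don't know'.\n\n"
        f"Knowledge:\n{kb}\n"
        f"Question: {question}\n"
        "Let's think step by step:\n"
        "1. Find the power limit for the license mentioned.\n"
        "2. List every model at or under that limit.\n"
        "3. Answer with the full list, or 'I don't know'.\n"
    )

prompt = build_cot_prompt(KB_SNIPPET, QUESTION)
print(prompt)
```

The explicit numbered steps are the "little by little" part: the model fills in each step before committing to a final answer, which tends to reduce made-up answers.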
p
🤗 Thank you! I'll try it and let you know. Thank you! 🙂
c
We have a question in our #1132033181076443147 that covers a similar use case
When dealing with repetitive data, such as tables or lists of options, GPT may struggle to consolidate results.
Check out this thread to see how you can use the Tables feature to have the bot generate accurate answers: https://discord.com/channels/1108396290624213082/1135590661316415658/1135593135196610571
We suggest using the table feature in that use case.
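A quick sketch of the idea behind that suggestion: repetitive options consolidate better when the KB stores them as one table rather than scattered prose. The models, colors, and license classes below are invented for illustration; this just renders rows as a markdown table.

```python
# Hypothetical motorcycle data, one row per option.
BIKES = [
    {"model": "X1", "color": "white", "license": "A2"},
    {"model": "X2", "color": "white", "license": "A3"},
    {"model": "X3", "color": "red", "license": "A2"},
]

def to_markdown_table(rows):
    """Render a list of dicts as a markdown table (header, rule, rows)."""
    headers = list(rows[0])
    lines = [
        "| " + " | ".join(headers) + " |",
        "| " + " | ".join("---" for _ in headers) + " |",
    ]
    for row in rows:
        lines.append("| " + " | ".join(row[h] for h in headers) + " |")
    return "\n".join(lines)

table = to_markdown_table(BIKES)
print(table)
```

With every white model sitting in the same table, a query like "what white motorcycles do you have?" has all matching rows in one place instead of spread across separate text sections.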
p
Thank you Sabrina! I'll try to apply this solution 🤗
Maybe it's a stupid question, but how can I insert the table I'm creating into the Knowledge resources?
c
Let me see.. I think we have a video
b
@crooked-van-25152 I'm positive there's also a video about how to easily do chain-of-thought, in case @powerful-restaurant-64772 still wants to try it later, but I can't find it. If by any chance you stumble across it, please share it here too.
c

https://youtu.be/ydIyOXIPmhU

this is the video for the tables
I'd start the video around the 19-20 minute mark. I think that's when Robert starts talking about tables
p
Perfect thank you!
c
Anytime!
I've asked the team for it.. will keep you posted 🙂
So you are right @bumpy-addition-21507 . We did have a video about how to easily do chain-of-thought. But we had to remove the video because the UI has changed too much since then.
I could send it here, even though it's unlisted...
But it might cause confusion
b
ah! Thank you @crooked-van-25152, I knew I wasn't crazy haha. Well, I guess I'll leave that up to @powerful-restaurant-64772 then, depending on whether she thinks it can still help her despite the UI changes.
c
@acceptable-kangaroo-64719 might record a new one 😉
p
I don't know anything about the subject, so it could still be useful for me to have some conceptual basis, if possible 🙂
c

https://youtu.be/gaskjHpMqAM

you should have access to it 🙂 but like I said, some things might not be accurate anymore.
p
Thanks so much @crooked-van-25152! 🙂 I watched the video about tables that you shared, but it seems to refer to creating a table to obtain information... I can't understand how to use it in the KB. Do you have any advice for me? Thank you
I'm trying to structure the KB in various ways, but it keeps giving me wrong answers 😦
r
Facing the same thing, where it makes up completely wrong answers and even provides a fabricated URL like 'www.example.com/product'. In the AI instructions I do say not to make up products or URLs, but clearly that's being ignored 🙂
p
Yes, the chat sometimes also mentions the KB! 🫣
b
@powerful-restaurant-64772 what do you mean it mentions the KB?
c
The citations?
r
In my case, it indeed referenced a document (products.txt) in the knowledge base and provided a link to it. If I recall correctly, this happened when I gave explicit details in the AI task instructions about products contained in that file.
c
You could always state in the KB description to never add the reference of the answer.
p
The chat says "As written in the document" or something similar.
c
Hey @acceptable-kangaroo-64719 , we need the big 💪 here! Do you mind looking into this please?
a
I've found success with this added to my prompt:

```md
Omit references to documents, chat transcripts, given context, provided knowledge, or chat history, but include their information. Do not ask the user to refer to documents.

It is better to say something brief than to say you don't have enough information.
```
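For anyone wiring this up themselves: a tiny sketch of prepending that snippet to a base system prompt. The dealership persona string is just an example, not anyone's actual prompt.

```python
# The anti-citation rule from the message above, kept as one constant.
NO_CITATION_RULE = (
    "Omit references to documents, chat transcripts, given context, "
    "provided knowledge, or chat history, but include their information. "
    "Do not ask the user to refer to documents.\n\n"
    "It is better to say something brief than to say you don't have "
    "enough information."
)

def with_rule(base_prompt: str) -> str:
    """Append the rule to an existing system prompt."""
    return base_prompt.rstrip() + "\n\n" + NO_CITATION_RULE

# Example persona (hypothetical).
system_prompt = with_rule("You are a helpful motorcycle dealership assistant.")
print(system_prompt)
```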
c
Thanks Gordy!
p
Ok, thank you! Do I add this prompt in the description of the KB, or do I activate the personality agent and write this sentence in its description? Thank you
a
in the personality agent
p
Ok thank you!