knowledge base truncated
# 🤝help
a
I've been trying for 3 days to get the complete horoscope answer for the zodiac sign the user indicates, but I don't understand why the answer is truncated... I uploaded the KB both as a doc (incidentally, when inspecting the document it shows me some lines I didn't understand) and as plain text; in both cases the last paragraph (Consigli della Settimana) is not printed by the bot. Can you tell me where I'm going wrong? Or is there some setting that truncates the response?
https://cdn.discordapp.com/attachments/1230206448143826954/1230206448722645042/Cattura3a.JPG?ex=6627ee09&is=66269c89&hm=fcd217966b081d86db7a1a9940184ad06aa34a45d65e7e3a753887eaec9c3ece&
https://cdn.discordapp.com/attachments/1230206448143826954/1230206449158848582/Cattura2a.JPG?ex=6627ee09&is=66269c89&hm=d3983af421b5ff9bf63d809cd127fb90dcbc3b24983b168081caea45a236ed1a&
https://cdn.discordapp.com/attachments/1230206448143826954/1230206449549049977/Cattura1.JPG?ex=6627ee09&is=66269c89&hm=f16dea991c0d33e52d1faf5dd35c28dad8cec92d081dd5768144fa2f23f8afbd&
b
can you show me an example of a truncated response?
a
I drew it in red on the attached images; the text stops at the preceding paragraph. The chatbot reports all the text up to the Physical Well-being paragraph and leaves out Tips of the Week (in the image you can read the Italian, Consigli della settimana)
b
that's the information included in your knowledge base, but I'd like to see the truncated response in the emulator
so what the bot is actually sending as a response
b
hm, can you try making it clear in your KB that the tips of the week are associated with this particular horoscope?
it looks like the bot might not be registering these items as related
a
in what way? In the query I wrote to take all the text up to the next zodiac sign... do I need to modify the query?
b
can you export your bot and send the file here?
so I can test it out myself
a
yes
b
alright I was able to recreate the truncation issue
I have a question - do you want the bot to provide, verbatim, the same response to each query? e.g., if I'm a Leo, and my neighbour is a Leo, and we both consult your bot, will we both receive word-for-word the same response?
a
In what sense? The query changes based on user input
b
user input being their sign?
a
aries, sagittarius etc etc
yes
you write aries, for example
b
but two people who input Leo will receive the same response?
a
yes
I update the document once a week
b
in that case, I wouldn't recommend using Knowledge Bases at all. KBs work for providing custom, personalized answers to queries, but if you're looking to just provide static/verbatim messages, using KBs will actually be more expensive in the long run, since you're using AI tokens for no reason
instead, I would suggest something like this:
so, week-to-week, you would just update the 12 nodes like "Leo_Horoscope", rather than the Knowledge Base
this will both reduce your costs & prevent the truncation issue
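to make it concrete, here's a rough TypeScript sketch of what the static-node approach amounts to (the `horoscopes` record and `getHoroscope` are just illustrative names, not Botpress APIs — in the Studio you'd simply put each sign's text in its own node like "Leo_Horoscope"):
```typescript
// Sketch only: one hard-coded entry per sign, edited by hand each week.
// No LLM call happens here, so no tokens are spent and nothing gets truncated.
const horoscopes: Record<string, string> = {
  leo: "Love and Eros: ... Work and Finances: ... Physical Well-being: ... Tips of the Week: ...",
  aries: "...",
  // ...and the other ten signs
};

function getHoroscope(userInput: string): string {
  const sign = userInput.trim().toLowerCase();
  // Static lookup instead of a KB query: the response is always verbatim.
  return horoscopes[sign] ?? "Sorry, I don't recognize that sign.";
}
```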
a
Actually, my idea was eventually to go and fetch the doc from the client's site with Zapier; I can't do it in such a static way
b
ah, that's different from what you told me earlier about manually updating it weekly
a
I meant that the text is the same for all users... it's static in that sense. Uploading the file every week is an option in case I can't make the Zapier connection work
Right now I'm trying to make it work statically by uploading the doc to Botpress, but as soon as it works I'll look at how to connect Zapier
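the automation I have in mind would look roughly like this (just a sketch of mine; the URL and names are invented, not real endpoints):
```typescript
// Rough sketch of the Zapier-style automation: fetch the weekly horoscope
// text from the client's site so the bot content updates itself instead of
// me re-uploading a doc every week. The URL is a placeholder.
async function fetchWeeklyHoroscopes(): Promise<string> {
  const res = await fetch("https://client-site.example/horoscopes-week.txt"); // invented URL
  if (!res.ok) {
    throw new Error(`Fetch failed with status ${res.status}`);
  }
  return res.text();
}
```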
so I have to solve this text truncation problem
Let's say I'm also learning to make bots; I'm a neophyte, so understanding why these things happen is important for my training
b
gotcha
I'm still troubleshooting on my end~
a
Well, I'm glad my own inability isn't entirely to blame 😄
b
looks like there might indeed be some upper cap on what the LLM will send as a response
a
Yes, but I don't understand the reason for this character limit on the output :/ Could it depend on the gpt-3.5 model? I don't know what Botpress uses by default
b
I see you had 'Fastest' ticked here, which uses 3.5 exclusively
I swapped it over to Best, which uses gpt-4 by default, and interestingly:
a
Yes, I did a test to see if it was the model's fault
a
I'll try now... wait 😄
b
so gpt-4 does seem to be sending larger responses
ofc gpt-4 will be significantly more expensive than 3.5
b
like I said above, LLMs don't excel at sending verbatim pieces of text, they excel at interpreting and generating responses
so they may occasionally interpret your request and spit out what they think you want to hear
a
Do you think anything will change if I modify the query? "Extract the complete horoscope for this week's sign {{ workflow.rispostaoroscopo }}, without including any information about the next sign. Report all the text you find without leaving anything out."
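(if I understand correctly, the {{ workflow.rispostaoroscopo }} placeholder just gets substituted with the user's sign before the query runs — roughly like this sketch of mine, not actual Botpress code:)
```typescript
// Illustration only: how the workflow variable gets filled into the query
// text before it reaches the Knowledge Base. `render` is a made-up helper,
// not a Botpress API.
function render(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{\s*workflow\.(\w+)\s*\}\}/g, (_, key) => vars[key] ?? "");
}

const query = render(
  "Extract the complete horoscope for this week's sign {{ workflow.rispostaoroscopo }}, without including any information about the next sign.",
  { rispostaoroscopo: "leo" }
);
// => "Extract the complete horoscope for this week's sign leo, without including any information about the next sign."
```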
b
you could try!
a
I changed the query like this: "Extract the complete horoscope for this week's sign {{ workflow.horoscopeanswer }}, without including any information about the next sign. Report all the text you find without leaving anything out. The text must contain the paragraphs: love and eros, work and finances, physical well-being and advice of the week."
with the 3.5 model
b
oh fabulous
a
What strange things happen, hahaha. Due to an oversight of mine I had pasted the Italian query underneath the English one, and in that case everything works correctly. I tried cleaning the query up by putting everything in English or everything in Italian, and then it truncates the answers again... this must be a case study. The following query is the only one that gives a correct answer: "Extract the complete horoscope for this week's sign {{ workflow.rispostaoroscopo }}, without including any information about the next sign. Report all the text you find without leaving anything out. Estrai l'oroscopo completo del segno di questa settimana {{ workflow.rispostaoroscopo }}, senza includere alcuna informazione sul prossimo segno. Segnala tutto il testo che trovi senza tralasciare nulla. Il testo deve contenere i paragrafi: amore e eros, lavoro e finanze, benessere fisico e consigli della settimana." (the Italian half repeats the English instructions and adds that the text must contain the paragraphs: love and eros, work and finances, physical well-being and tips of the week) 🤣 🤣