Being able to set a minimum score for chunks that ...
# 👀feature-requests
Hello, as discussed in, it would be great to have a setting for the minimum score a chunk must reach before it is sent to the LLM. Here, for example, with a simple question about payment instalments, I got this kind of message included, with a score of 0.72: "CRYO GELS selling tips: Home Cryotherapy Experience: With the activating mist, enjoy the refreshing, slimming benefits of cryotherapy right in your own home. Targeting Stubborn Areas: With 1800 rotations per minute, the device works deep down to reduce stubborn fat, whether on thighs, buttocks, stomach or arms. Sculpting & Firming: Combined with the activating mist, the device stimulates the skin, making it more toned and sculpted. Refreshing sensation: The combined use of the activating mist and the device provides a pleasant sensation of freshness, stimulating fat elimination. Nature's advanced technology: Although equipped with cutting-edge technology, this device harmonizes perfectly with natural solutions such as the activating mist." It has literally nothing to do with my question, and it's not even the last chunk included (the last chunks have the worst scores). This setting could help reduce token usage. I can provide more examples if needed. Thanks!
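The requested setting boils down to a simple filter over the retrieved chunks. A minimal sketch of what it could look like, assuming each chunk carries its similarity score (the `filter_chunks` name and the dict shape are illustrative, not the product's actual API):

```python
# Hypothetical sketch: drop retrieved KB chunks whose similarity score
# falls below a user-configurable minimum before building the LLM prompt.

def filter_chunks(chunks, min_score=0.75):
    """Keep only chunks whose similarity score meets the threshold."""
    return [c for c in chunks if c["score"] >= min_score]

retrieved = [
    {"text": "Payment instalment policy ...", "score": 0.91},
    {"text": "CRYO GELS selling tips ...", "score": 0.72},
]

kept = filter_chunks(retrieved, min_score=0.75)
# Only the instalment chunk survives; the 0.72 off-topic chunk is dropped.
```

With a threshold of 0.75, the off-topic 0.72 chunk from the example above would never reach the prompt, saving its tokens entirely.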
I just saw that we can now choose the number of chunks to use from the KB, so I don't know if this feature still makes sense.
It still does. This would be amazing to have
This one is a bit trickier to implement, as similarity scores vary drastically from use case to use case, and even within the same bot! But we have some ideas on how this could be implemented transparently for users, especially if combined with feedback scoring. The idea would be to keep track of feedback for a given bot and its KBs, then train a simplistic model, like a linear regression or an SVM, to determine what the similarity threshold should be based on historical conversations. We could offer a knob to amplify the margin in the case of an SVM.
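To make the idea concrete, here is a minimal sketch of learning a per-bot threshold from historical feedback. It uses a plain decision stump (pick the cutoff that best separates chunks users found helpful from those they didn't) as a simplistic stand-in for the linear-regression/SVM idea; the `learn_threshold` name, the data shape, and the feedback labels are all assumptions for illustration:

```python
# Hypothetical sketch: derive a similarity threshold from feedback history.
# history: list of (similarity_score, was_helpful) pairs collected from
# past conversations for a given bot + KB combination.

def learn_threshold(history):
    """Return the cutoff that best separates helpful from unhelpful chunks
    (a decision stump; a simplistic stand-in for a linear model or SVM)."""
    candidates = sorted({score for score, _ in history})
    best_t, best_acc = 0.0, -1.0
    for t in candidates:
        # Predict "helpful" when the score is at or above the candidate cutoff.
        correct = sum((score >= t) == helpful for score, helpful in history)
        acc = correct / len(history)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Toy feedback history: low-scoring chunks were flagged unhelpful.
history = [
    (0.72, False), (0.65, False), (0.70, False),
    (0.85, True), (0.90, True), (0.88, True),
]
threshold = learn_threshold(history)  # -> 0.85 on this toy data
```

A margin knob like the one mentioned for the SVM case could then be exposed as a simple offset subtracted from (or added to) the learned cutoff, letting users trade recall against token usage.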