I'm building a chatbot that uses a client's product database. Users can ask for information about products. Sometimes users type the correct product name, and sometimes they make spelling errors. The AI can recognize the misspelling and present the correct product in most cases, but not always.
Is there a way to analyze the AI's responses over time, or during the bot creation process, to get a confidence percentage for each answer and correct the AI when it's wrong?
For example, imagine there are two products: "Amarone" and "Amarone Beta." If a user asks for "Amron," the AI will find both products, each with a confidence score. Currently, I can take the AI's response and classify it into a product array, including the confidence score for each product. What I can't do, and would really appreciate help with, is correcting the AI when it's wrong. In this particular case, the confidence score is higher for "Amarone Beta" than for "Amarone," as sketched below.
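To make the scenario concrete, here's a simplified sketch of the kind of ranked result I can already build today (field names and confidence values are just illustrative, not my actual output):

```python
# Simplified illustration of the classified response I can already produce.
# The confidence values are made up; in practice they come from the AI's output.
user_query = "Amron"

ranked_products = [
    {"product": "Amarone Beta", "confidence": 0.78},  # currently ranked first
    {"product": "Amarone", "confidence": 0.71},       # the answer I actually want
]

best_match = max(ranked_products, key=lambda p: p["confidence"])
print(best_match["product"])  # prints "Amarone Beta", which is wrong in this case
```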
I'd like to be able to correct it for future user interactions by telling the AI that the correct answer should have been "Amarone."
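Conceptually, what I'm imagining is some kind of feedback or correction store that sits on top of the AI's ranking, roughly like this (the table and function names here are hypothetical, just to show the idea):

```python
# Hypothetical sketch of the correction mechanism I'd like to have:
# a table of reviewed queries that overrides the AI's top-ranked product.
manual_corrections = {
    "amron": "Amarone",  # reviewed: the right answer for this query is "Amarone"
}

def resolve_product(query, ranked_products):
    """Return the manually corrected product if one exists, else the AI's best guess."""
    corrected = manual_corrections.get(query.strip().lower())
    if corrected is not None:
        return corrected
    return max(ranked_products, key=lambda p: p["confidence"])["product"]

ai_ranking = [
    {"product": "Amarone Beta", "confidence": 0.78},
    {"product": "Amarone", "confidence": 0.71},
]
print(resolve_product("Amron", ai_ranking))  # now resolves to "Amarone"
```

Ideally, though, the correction would actually feed back into the AI itself rather than being a lookup table I maintain by hand. Is there a standard way to do this?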