Bug - ChatGPT massive behavioral changes
# 🤝help
b
Hello there! Since there is no bug channel, I'm reporting this here. Since the last GPT Turbo update yesterday (at least that's when I first noticed it), some AI Tasks' behavior has changed a lot, to the point where they become very unreliable. Note that I have only observed this bug in French; it does not appear to happen in English (more testing needed).

My best example:
- The user asks a question / describes a problem.
- The user's message is passed to an AITask that decides whether the problem is IT-related, and if so sets a boolean to true.

Before the update, this prompt and AITask worked perfectly in many scenarios. Now, whether the AI understands the sentence depends on exactly how the words are spelled. For example, these three sentences all have the same meaning ("my computer won't start anymore") and are all IT-related; the only difference is the spelling of "démarre":
- "Mon ordinateur ne demare plus" returns false
- "Mon ordinateur ne demarre plus" returns true
- "Mon ordinateur ne démarre plus" returns false

The difference is a single letter and an accent. The third sentence is the correctly spelled one, yet it fails while the unaccented one works. I don't understand how the change from gpt-3.5-turbo-0301 to gpt-3.5-turbo-0613 could cause this, since it is only supposed to introduce a new variables feature, but the timing seems too perfect.
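For anyone trying to reproduce this, a minimal test loop could look like the sketch below. The real check is an AITask inside the bot, so `classify_is_it` here is a hypothetical stub standing in for that call (hard-coded to mimic the observed results); swap it for the actual AITask invocation to run the comparison for real.

```python
# Hypothetical repro harness: classify_is_it is a stub standing in for the
# bot's AITask; it is hard-coded to reproduce the behavior observed above.
variants = [
    "Mon ordinateur ne demare plus",   # misspelled, no accent
    "Mon ordinateur ne demarre plus",  # no accent
    "Mon ordinateur ne démarre plus",  # correctly spelled
]

def classify_is_it(sentence: str) -> bool:
    # Replace this stub with the real AITask call when testing in the bot.
    return sentence == "Mon ordinateur ne demarre plus"

# Run each variant through the classifier and record the boolean result.
results = {s: classify_is_it(s) for s in variants}
for sentence, is_it in results.items():
    print(f"{sentence!r} -> {is_it}")
```

Logging each variant's result side by side makes the single-letter/accent sensitivity easy to spot.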
a
Hey @bulky-dusk-45106, thanks for bringing this up. Are your AI task instructions in English or French? And have you set your bot language to French or used the translator agent?
b
My instructions are in English, and I use the translator agent hard-set to French. I did further testing, and weirdly enough, passing the exact same sentence multiple times doesn't yield the same result each time.
a
That is interesting! Do you get more consistent results if your AI task instructions are in French?
b
Never mind the last part; I forgot to update a prompt in my test loop. The results are consistent for the same message. As for prompting in French, it generally yields worse results. I think I have found a prompt that works in this example, but I find it strange that the old prompt suddenly stopped working.
a
Yes, that's a risk we take using third-party foundational models. I know we have plans to incorporate other foundational models in the future, but I'm not sure whether that would include past versions of models. It would make a good post in #1111026806254993459!
b
I agree; I will do that right now. While this is fine during development, if the bot were live in production, a forced update breaking its tasks would be very bad.
The AI Tasks are working a lot worse; the AI seems to lack understanding compared to before.