Sometimes the AI and Personality agent rewrites an instruction for a user and slightly changes the meaning of the message in a way that could cause confusion. For example, it sometimes greets the user again in the middle of the conversation, or adds extra questions to the end of a statement on a node where there is no variable ready to capture the answer. This happens mainly with the AI task card; it will ask additional questions even when I specify "do not ask any questions".