Ahhhhhhhhhhh I see, thanks for the link. Speccing the appropriate number of tokens would definitely need to be done carefully, probably based on an average of how extensive conversations tend to get. Limiting or controlling raw user input would also go a long way toward normalizing this, I'm sure.
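
The idea above (sizing a per-message budget from average conversation length, then clipping raw user input to it) could be sketched roughly like this. This is a minimal, hypothetical sketch: the function names are mine, and whitespace-split word counts stand in for a real tokenizer, which would be needed in practice.

```python
# Hypothetical sketch: derive a per-message token cap from average
# conversation length, then truncate raw user input to fit.
# Assumption: ~1 token per whitespace-separated word; a real setup
# would use the model's actual tokenizer instead.

def estimate_tokens(text: str) -> int:
    """Rough token estimate (assumes ~1 token per word)."""
    return len(text.split())

def per_message_budget(conversation_lengths: list[int],
                       context_limit: int = 4096) -> int:
    """Cap each message so an average-length conversation fits the context."""
    avg_messages = sum(conversation_lengths) / len(conversation_lengths)
    return max(1, int(context_limit / avg_messages))

def truncate_input(text: str, budget: int) -> str:
    """Drop words beyond the budget (crude control of raw user input)."""
    words = text.split()
    return " ".join(words[:budget])

# Conversations averaging ~20 messages against a 4096-token context
budget = per_message_budget([12, 25, 18, 30, 15])
clipped = truncate_input("some long raw user input " * 100, budget)
```

Something like this would at least keep one very chatty user from blowing the whole context window on a single turn.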