Hey there Remo👋
This behavior is typical of a RAG-based system, which is what is happening when you query the KB. RAG is really good at searching documents to find specific pieces of information (chunks) to answer targeted questions. However, it is not as good at retrieving and outputting large contiguous sections of text, like full chapters or pages, word for word. This is because the RAG system breaks documents into smaller chunks, and the language model (LLM) generates responses based on the retrieved relevant chunks. Outputting a full chapter would require stitching many chunks back together coherently, which RAG is not designed to do. It's a bit like asking a search engine to provide an entire book when it's designed to guide you to specific pages. Not a perfect analogy, but you get the idea.
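To make the chunking point concrete, here is a rough sketch of what most RAG pipelines do during ingestion. The function name and parameters (`chunk_size`, `overlap`) are illustrative, not your KB's actual API:

```python
# Hypothetical sketch of how a RAG pipeline splits a document before indexing.
# Parameter names and defaults are made up for illustration.

def chunk_document(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character chunks, as many RAG pipelines do."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
    return chunks

chapter = "A" * 1200  # stand-in for a long chapter
pieces = chunk_document(chapter)
print(len(pieces))  # → 3: the chapter is no longer one contiguous unit
```

Retrieval then pulls back only the chunks that score highest against your question, so the full chapter never exists as a single retrievable unit.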
The model in ChatGPT is trained to interact in a conversational manner, and it can draw upon a broader range of capabilities to provide more extensive information when prompted. In this context, that means the ability to generate larger sections of text as a coherent and contextually relevant response, whereas a RAG-based system remains best suited to answering pointed questions with shorter, precise answers.
A possibility would be to store the chapter in a table. It depends a bit on how big the chapter is, but I think it's doable. You could then find the record relevant to the question and return the whole chapter stored in one of its columns.
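As a minimal sketch of that idea, using SQLite here just for illustration (the table and column names are made up, and your actual KB table setup will differ):

```python
import sqlite3

# Hypothetical table holding each chapter whole in a single column,
# so retrieval returns the full text instead of stitched-together chunks.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE chapters (id INTEGER PRIMARY KEY, title TEXT, body TEXT)")
conn.execute(
    "INSERT INTO chapters (title, body) VALUES (?, ?)",
    ("Chapter 3: Pricing", "Full chapter text goes here..."),
)

# Step 1: find the record relevant to the question.
# Step 2: return the whole chapter from its column.
row = conn.execute(
    "SELECT body FROM chapters WHERE title LIKE ?", ("%Pricing%",)
).fetchone()
print(row[0])  # the complete, contiguous chapter text
```

The key design choice is that matching happens on a small field (the title, or whatever you use to find the record), while the payload column is returned verbatim, so nothing needs to be reassembled.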