AI Generate Text
# 📖tutorials
q
Prompting strategies for better results when using AI Generate Text or APIs (these can also be adapted for AI Task instructions):

1. **Include all important variables at the end.** Especially if they involve a lot of text, instructions, products, etc. Also give each one a title, as in the first image (Code challenge, Code, Corrections, Error). When the AI "reads" the prompt, it follows the instructions much better. If the variables are scattered all over the prompt (and they include a lot of text), the AI can get confused and drop the ball too early, before it has a chance to fully understand the prompt instructions.
2. **Provide enough context.** Ensure your prompts give enough background information to guide the AI's responses. Lack of context limits the AI's ability to generate relevant and accurate responses.
3. **Avoid irrelevant information.** Unnecessary details can confuse the AI or lead to off-topic responses. Include only essential information to keep the AI on track.
4. **Break complex tasks into simpler steps.** For complicated requests, decompose the task into smaller, manageable prompts. This helps the AI process each component effectively and raises the overall quality of the output.
5. **Use formatting tools.** Employ formatting (like Markdown) to structure your prompts clearly. This helps organize the request and can potentially tap into better-quality data from the AI's training.

https://cdn.discordapp.com/attachments/1231503419278360576/1231503419546664960/Screenshot_from_2024-04-21_10-14-26.png?ex=663731ef&is=6624bcef&hm=2f12da8df9a1ae4052377cdb21bbb9a2873dfe043a6851233d9b5ee37c7ef518&

6. **Repetition for important instructions.** Repeating key elements within the prompt reinforces their importance, ensuring the AI does not overlook critical details within its limited context window.
7. **Role play.** Assign the AI a specific role (e.g., an expert in a particular field) to align its responses more closely with the expertise required for the task.
8. **Add a backstory.** A backstory or a complete use-case example can engage the AI so it stays more focused on the topic and gives better responses.
9. **Avoid leading the AI in multiple-choice tasks.** Use examples, but avoid listing options beforehand; listing them may lead the AI to make arbitrary, poor choices without proper analysis.
10. **Use few-shot over zero-shot prompting.** Provide examples of desired outputs or scenarios to guide the AI, especially for complex or niche topics. This helps the AI understand the context better and produce more relevant responses.
11. **Chain of thought.** Encourage the AI to break complex problems into intermediate steps before arriving at a final answer, which can clarify the reasoning process and reduce errors.
12. **Prompt chaining.** Sequence your prompts strategically so that each builds upon the previous one, refining the AI's outputs progressively for complex tasks.
13. **Knowledge generation prompting.** When detailed domain knowledge is required, first prompt the AI to generate background information, which can then inform more focused queries.
14. **Intermediate prompt engineering techniques.** Techniques like asking the AI to rephrase a question before answering can help confirm its understanding of the prompt, potentially improving response accuracy.
15. **Try again.** Don't be discouraged by failures. Refine and retry prompts to improve the accuracy and relevance of AI responses. Each test offers insight into better prompt structuring.
It would be nice to hear what prompting strategies our @User professionals use, what their best ideas are, and what we should add to this list or remove from it. Most of these I already use, but there are some new ones I didn't even know existed, and I'm eager to start trying them. Like, for example, number 5. I have used titles and capital letters, but I didn't know that formatting with # like Markdown https://cdn.discordapp.com/attachments/1231503419278360576/1231507188258705438/Screenshot_from_2024-04-21_10-28-16.png?ex=662611f1&is=6624c071&hm=946ccd61b27854d122d056be126d945081a0253828914eb8a8a095b0aec8d7c8&
acts as a 'secret signal' to the AI, since some scientists and other professionals ALWAYS use it, and it could potentially connect the prompt to better training data. If that works, it really blows my mind 🧠 🤯
c
Great topic! The right prompt makes all the difference and can lift older models to rival or even surpass newer ones. I recommend the latest talks from Andrew Ng of DeepLearning.AI on RAG systems and general prompting. Personally, I always iterate on prompts until I get a well-structured, clear prompt that can be duplicated. However, I've also noticed from testing that a well-structured prompt is no guarantee of a better result, but most of the time it gets you more "bang for the buck". What are your thoughts on RAG and vector search (Pinecone)?
q
@cold-jewelry-54343 "What are your thoughts on RAG and vector search (pinecone)" That's a good question, and I think it's worth its own tutorial! I have some experience using the Pinecone vector database in Python AI projects; we had early access to it for blockchain AI projects before joining the Botpress community. I find Botpress Knowledge Bases even better, especially for tasks that do not require fine-tuning LLM models (which is 95% of my use cases).

In my use cases, creating a well-structured knowledge base and using RAG improves AI responses to coding-related questions. Fine-tuning LLM models like GPT-4 can be expensive, potentially costing hundreds or thousands of dollars. In contrast, fine-tuning open-source models like Llama 3 can be done for under $5.

In my coding-related projects, when we create a Botpress Knowledge Base with enough correct data, such as real project code from GitHub, official documents, and JavaScript challenges and solutions, the AI uses this information to contextualize its responses more effectively. This approach is particularly useful for coding-related queries involving specific languages or frameworks like JavaScript or React. Direct access to relevant coding challenges, solutions, and real project code examples from GitHub can significantly improve the specificity and accuracy of the responses, ensuring that generated answers are not only theoretically correct but also practically applicable.

So in my opinion, when you first use RAG (to retrieve the relevant data from the Botpress KB) and then ask the AI to solve a coding challenge, or even to build a project step by step, the AI generates much better responses than it does without RAG, because its answers are now grounded in both the retrieved data and its pre-existing knowledge.
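The retrieve-then-ask pattern described above can be sketched in a few lines. The keyword-overlap scoring below is a toy stand-in for the vector similarity search a real KB or Pinecone performs; all names here are illustrative.

```python
def retrieve(query: str, knowledge_base: list[str], top_k: int = 2) -> list[str]:
    """Rank KB chunks by word overlap with the query -- a toy stand-in
    for real vector similarity search."""
    q_words = set(query.lower().split())
    scored = sorted(
        knowledge_base,
        key=lambda chunk: len(q_words & set(chunk.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def rag_prompt(query: str, knowledge_base: list[str]) -> str:
    """First retrieve relevant chunks, then ask the model with that context."""
    context = "\n".join(retrieve(query, knowledge_base))
    return f"# Context\n{context}\n\n# Task\n{query}"


kb = [
    "React components re-render when their props or state change.",
    "JavaScript arrays have map, filter and reduce methods.",
    "CSS grid defines two-dimensional page layouts.",
]
print(rag_prompt("Why does my React component re-render?", kb))
```

The final prompt the model sees contains the retrieved context first and the task last, matching strategy 1 from the list above.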
c
@quick-musician-29561 "I find Botpress Knowledge Bases even better" — do you mean that the KBs give better results without additional RAG, or does the KB already have such tech under the hood? Or is a traditional KB better suited for specific use cases vs. RAG? I'm still learning in which scenarios a RAG system can be beneficial over a traditional KB.
q
In my experience and understanding, Botpress Knowledge Bases and Pinecone vector databases use the same technology under the hood, and both provide equally good results. One reason I prefer Botpress is that it's much easier to start using quickly, with no need for an additional database. Both the Botpress KB and a Pinecone vector database greatly improve response quality and task solving when relevant information is first retrieved from the KB and given to the AI as additional context, and only then is the AI asked to solve the task. Note that I have experience using this only for coding-related tasks, but because it works so well there, I'm guessing it should work well with most other topics as well. That's why we need to test these a lot, TOGETHER 🦾 🛠️ 🤖
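Under the hood, "vector search" in both cases boils down to embedding text and finding nearest neighbors, typically by cosine similarity. A minimal illustration with toy 3-dimensional vectors (real embedding models produce hundreds of dimensions, and the chunk names here are made up):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: the measure vector stores typically use
    to find the chunks nearest to a query embedding."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy 3-dimensional "embeddings" for three KB chunks.
chunks = {
    "react hooks": [0.9, 0.1, 0.0],
    "python asyncio": [0.1, 0.9, 0.1],
    "css layout": [0.0, 0.2, 0.9],
}
query_embedding = [0.8, 0.2, 0.1]  # pretend this embeds "useState in React"
best = max(chunks, key=lambda name: cosine(query_embedding, chunks[name]))
print(best)
```

Whichever chunk's vector points in nearly the same direction as the query vector wins, which is why semantically similar text is retrieved even when the words differ.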
That's what I'm testing here: "Solutions from the knowledge base are different than answers from the knowledge base; LLMs need to create solutions based on the official documents and code examples, even though the exact same tasks are not in the documents." https://discord.com/channels/1108396290624213082/1229502740854472945
l
I'm very new to Botpress, but I've been working with prompting and Pinecone for several months. I REALLY like the results of using AI Generate Text for my prompt, and I haven't done it nearly to the level you describe here. As you mention in this thread, I'm also really enjoying using the Botpress Knowledge Base instead of Pinecone. I'm constantly fighting with retrieving the right vectors and cutting up the text; this is just so much easier. I'll definitely be experimenting with your recommendations here. Thank you!
q
That's great to hear! 🔥 Please share with us the wisdom you've learned from prompting and working with Pinecone for several months, as we build Botpress projects here together.
l
You are such a wonderful ambassador. Thank you for being so kind and inviting and positive. I definitely will.
but I'm still doing my best to help other bot builders with their projects and inspire them to achieve great things.
h
The Botpress knowledge base is fine. It's just that it would be so much faster if some genius here figured out how to use the Groq API with Pinecone as the RAG layer, feeding the retrieved knowledge together with the prompt to the Groq API. Groq is the fastest LLM API available, and Pinecone is a vector-based knowledge base. I hope someone figures this out and lets us know how to use it with Botpress.
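A rough sketch of that Groq + Pinecone idea is below. The client calls follow the public `pinecone` and `groq` Python SDKs, but the index name, model name, metadata field, and the source of the query embedding are all assumptions; API keys are read from the `PINECONE_API_KEY` / `GROQ_API_KEY` environment variables. Treat it as a starting point, not working production code.

```python
def build_context_prompt(question: str, chunks: list[str]) -> str:
    """Pure helper: merge retrieved chunks and the question into one prompt."""
    context = "\n\n".join(chunks)
    return f"# Knowledge\n{context}\n\n# Question\n{question}"


def ask_groq_with_pinecone(question: str, query_embedding: list[float]) -> str:
    """Retrieve from Pinecone, then answer via Groq's OpenAI-style chat API.
    Index/model names and the 'text' metadata field are assumptions."""
    from pinecone import Pinecone   # pip install pinecone-client
    from groq import Groq           # pip install groq

    index = Pinecone().Index("my-knowledge-base")  # hypothetical index name
    result = index.query(vector=query_embedding, top_k=3, include_metadata=True)
    chunks = [m["metadata"]["text"] for m in result["matches"]]

    reply = Groq().chat.completions.create(
        model="llama3-70b-8192",  # a Groq-hosted Llama 3 model
        messages=[{"role": "user",
                   "content": build_context_prompt(question, chunks)}],
    )
    return reply.choices[0].message.content
```

The speed win comes entirely from Groq's inference side; the retrieval step is the same as with any other vector store.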
r
@quick-musician-29561 Hello Dev I need some help
q
m
He'd show us up, so we are boycotting him
q
I was wondering why there aren't more comments here, even though I tried summoning all the Botbassadors
Nobody cares about my ideas.
m
HAHAHA, we're tired, and yeah, honestly I think a few people are off / super busy right now. It's been quiet in the Botbassador chat too
c
afaik the Knowledge Base already uses vectorization methods to retrieve data, but I agree it would be great to have more options for influencing the search method (how it finds its nearest neighbors), including options like chunk size or chunking PDFs per page. While we can send data to Groq for processing, we can only send selective knowledge over to Groq/external LLMs, which is not ideal. To get the full power of these LLMs, they would need direct access to the Knowledge Base within Botpress, which is something only the official team can implement.
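The chunking controls mentioned above (chunk size, overlap) are easy to picture with a small sketch; this splits text into overlapping word windows, which is one common approach, though real KBs may chunk by characters, tokens, or pages instead.

```python
def chunk_text(text: str, chunk_size: int = 50, overlap: int = 10) -> list[str]:
    """Split text into overlapping windows of `chunk_size` words.
    Overlap keeps context that would otherwise be cut at a boundary."""
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks


doc = " ".join(f"word{i}" for i in range(120))
chunks = chunk_text(doc, chunk_size=50, overlap=10)
print(len(chunks))  # 120 words -> windows starting at 0, 40, 80
```

Smaller chunks give more precise retrieval hits; larger chunks give the LLM more surrounding context per hit. That trade-off is exactly why exposing these knobs would be useful.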