Use Groq API, it's fast
# 📖tutorials
f
This was inspired by a video @gentle-engine-13076 sent about Groq in #1121494527727902891
```javascript
const GROQ_API_KEY = 'API Key';

const data = {
  messages: [
    {
      role: 'user',
      content: 'Explain the importance of low latency LLMs',
    },
  ],
  model: 'mixtral-8x7b-32768',
};

async function getGroqMessage() {
  try {
    const response = await axios.post('https://api.groq.com/openai/v1/chat/completions', data, {
      headers: {
        Authorization: `Bearer ${GROQ_API_KEY}`,
        'Content-Type': 'application/json',
      },
    });

    workflow.groqMessage = response.data.choices[0].message.content;
    return workflow.groqMessage;
  } catch (error) {
    console.error(error);
    throw error;
  }
}

await getGroqMessage()
  .then((message) => {
    console.log('Groq message:', message);
  })
  .catch((error) => {
    console.error('Error:', error);
  });
```
[Docs](https://console.groq.com/docs/quickstart) [Models you can use](https://console.groq.com/docs/models) [API Key](https://console.groq.com/keys) This is not optimized at all, just what I could quickly put together while in class. Planning on updating this, and the [Claude-3 tutorial](https://discord.com/channels/1108396290624213082/1214664239890047106) this weekend.
g
@quick-musician-29561 @wide-oyster-38514 ⚡️ Enjoy the new toys gents!
w
Let's gooo, @fresh-fireman-491 you are a legend 🐐
Maybe you can do my taxes for me one day as well
q
🔥 🔥 🔥
Wow, that was fast Decay!
g
```javascript
const GROQ_API_KEY = env.GROQ_API_KEY;

const data = {
  messages: [
    {
      role: 'user',
      content: `${event.preview}`,
    },
  ],
  model: 'mixtral-8x7b-32768',
};

async function getGroqMessage() {
  try {
    const response = await axios.post('https://api.groq.com/openai/v1/chat/completions', data, {
      headers: {
        Authorization: `Bearer ${GROQ_API_KEY}`,
        'Content-Type': 'application/json',
      },
    });

    workflow.groqMessage = response.data.choices[0].message.content;
    return workflow.groqMessage;
  } catch (error) {
    console.error(error);
    throw error;
  }
}

await getGroqMessage()
  .then((message) => {
    console.log('Groq message:', message);
  })
  .catch((error) => {
    console.error('Error:', error);
  });
```
a couple milliseconds faster setup ☺️
PS, btw... specs on the models available via the Groq API:
- Model Name: LLaMA2-70b-chat, Context Window: 4,096 tokens, API String: `llama2-70b-4096`
- Model Name: Mixtral-8x7b-Instruct-v0.1, Context Window: 32,768 tokens, API String: `mixtral-8x7b-32768`
- Model Name: Gemma-7b-it, Context Window: 8,192 tokens, API String: `gemma-7b-it`
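Since the context windows differ so much, it can help to keep these specs in a small lookup and pick a model based on a rough token estimate. A minimal sketch (the helper name and the chars/4 heuristic are my own, not from Groq's docs):

```javascript
// Context windows for the Groq-hosted models listed above (as of this writing).
const GROQ_MODELS = {
  'llama2-70b-4096': 4096,
  'mixtral-8x7b-32768': 32768,
  'gemma-7b-it': 8192,
};

// Pick the cheapest-fitting model: the smallest context window that still
// holds the prompt. `promptTokens` is a rough estimate (e.g. text.length / 4).
function pickModel(promptTokens) {
  const candidates = Object.entries(GROQ_MODELS)
    .filter(([, window]) => window >= promptTokens)
    .sort((a, b) => a[1] - b[1]);
  return candidates.length ? candidates[0][0] : null;
}
```

For example, `pickModel(5000)` skips LLaMA2 (too small) and lands on Gemma; anything over 32,768 estimated tokens returns `null` so you can truncate or summarize first.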
f
Thank you for adding that @gentle-engine-13076 ! Planning on building a simple QnA with this later, just to see how fast the conversation can get, maybe even with a React app instead of the native webchat
g
the Groq discord server has some useful info. I'd share it here, but it'd get removed (it's linked on their site)
f
Thought we could hide it
g
LOL told u
f
Yea
As the code
g
you'll need some new tricks to get around the sentries 🛡️
f
Can't edit old messages to contain it either 😦
Link is discord.gg/ groq
That works
q
Super! 🛠️ 💎 🫡
Thank you both! 🦾 🤖 🚀
g
Is Groq's API free to use?
c
Sorry guys! That's the only way we can block people trying to send these weird Discord invites.
Good work @fresh-fireman-491 & @gentle-engine-13076 ! 🔥 🚀
f
I haven't added a credit card.
"The team is working on introducing paid tiers with stable and increased rate limits in the near future."
Simple workflows for each model
h
Hey guys, great tutorials you've been sharing lately. I should make time this weekend to catch up and learn from your builds, since I've been occupied lately. Thank you @gentle-engine-13076 for sharing about Groq and @fresh-fireman-491 for showcasing it.
f
That sounds good! Let me know if you run into any problems
a
Thanks for the code. I have tested it with all the LLM models and it works pretty well, but there's no real-time data access 😦
f
You are very welcome 😁 Are you talking about streaming? If so, then Devmik made a tutorial on that https://discord.com/channels/1108396290624213082/1211400458329923636
a
Not just streaming, but getting information from the internet. For example: I want to know what concerts will take place in Malaga next week.
f
Ah okay. Not sure if the models can do that alone, but there is some 3rd-party software that you can use.
a
Yeah, my startup needs access to real-time data, which is why I am so interested in finding the best LLM+RAG combination. If anyone needs something similar, I will say that, in my honest opinion, the best LLM I have found is Cohere (command-r). They have a free tier that is amazing for testing your solution (if you are not building a massive solution, you may be able to just use the free API key). This is a real example from my chatbot:

- User: I'm looking for restaurants to eat grilled meat in Malaga
- Bot: Main Options:
  - Iñaki Grill. Location: Calle Ayala, 38. 29002 Málaga. Unique Features: Specializing in select meats cooked on the barbecue in their charcoal oven. Practical Details: Hours: Monday to Sunday from 1 p.m. to 11 p.m.
  - Ovid Grill. Location: C/ Denis Belgrano, 11 and C/ La Bolsa, 3. 29015 Málaga. Unique Features: Family restaurant with a display of high-quality meats. Practical Details: Hours: Every day from 1:00 p.m. to 4:30 p.m. and from 8:00 p.m. to 12:00 a.m.

(This is just a sample of the output; there are more results.) No hallucinated output, no fake results. You can use a custom prompt with the Cohere LLM to customize the behavior of the output. I recommend trying it, because over the last months I have tested every LLM I could find and this is, by far, the best solution for me. Hope this can help someone. Thanks!
g
Awesome! I needed to sleep, but I wanted to do that last night. ☺️ Today I added a little bit: the ability to let users add a system prompt via chat, plus some copy edits ✨ (I also added notes and turned off the Knowledge and Summary agents, since they cost AI spend) https://cdn.discordapp.com/attachments/1220643404833230898/1220863423865294968/Better_Call_Groq_API.bpz?ex=66107caa&is=65fe07aa&hm=7fe403c4b9405d390865080c38fdbbb05beb14f849266e6ead259c402a4a0e83&
q
Thanks for sharing that! 🫡 What kind of real-time data do you need to access from your chatbot? 🛠️
f
Great idea, thank you!
q
We've used Botpress chatbots along with other LLMs and tools when fetching the latest news about a topic (which should be close to your use case since 'concerts in Malaga next week' is public information). Here, I first did a web search, which I then fed to the Assistants API. You can do the same using Botpress only, which I've recommended in many of my Assistants API tutorials (when Botpress was better and faster). Web search:
```json
{
  "name": "duckduckgo_search",
  "description": "Fetch news results and organic results",
  "parameters": {
    "type": "object",
    "properties": {
      "query": {
        "type": "string",
        "description": "Search query"
      }
    },
    "required": [
      "query"
    ]
  }
}
```
https://discord.com/channels/1108396290624213082/1179076516169138196

Another way to get live data is to connect the chatbot to external APIs; we have examples of those in #1120796649686573086. I fetched data to another chatbot we have shared there by asking, 'What's the weather like in Stockholm today/next week?', and it always gives the correct answers using real-time data from the weather API.

https://discord.com/channels/1108396290624213082/1178590925421821982

A third way we've previously used to connect Botpress chatbots to real-time data is through Make.com, if you have the data on a platform that Make.com can easily connect to, like Google Sheets, for example.

https://discord.com/channels/1108396290624213082/1217840226169262170

Using Botpress, SERP API (a really good and customizable web search for 'news only', 'images', 'latest videos', etc.), other external APIs to access live information, other LLMs, and Make.com together, we can automate and connect to almost any real-time data using Botpress chatbots. With these new tools @gentle-engine-13076 🦸‍♀️ @fresh-fireman-491 🦸‍♂️ @agreeable-accountant-7248 🦸🏻‍♂️ have shown us, our chatbots are once again going to be on another level 🚀
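Whatever search backend is used with a tool like `duckduckgo_search` above, the raw results usually need to be flattened into plain text before feeding them back to the model. A minimal sketch, assuming a hypothetical result shape with `title`, `snippet`, and `link` fields (mirroring typical SERP outputs; adjust to what your search action actually returns):

```javascript
// Flatten an array of search results into a numbered, model-readable string.
// The { title, snippet, link } shape is an assumption about the search output.
function formatSearchResults(results) {
  return results
    .map((r, i) => `${i + 1}. ${r.title}\n${r.snippet}\nSource: ${r.link}`)
    .join('\n\n');
}
```

The resulting string can then be submitted as the tool's output (for the Assistants API) or appended to the prompt (for a plain chat completion).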
a
Wow, excellent explanation of the different approaches to getting info from the internet. Since this seems useful for more people, I will go deeper into what I am doing with my chatbot (it's integrated with WhatsApp):

1) User sends a question.

2) **I use an AI Task with this prompt:** You are AMALia, an artificial intelligence specialized in Malaga whose objective is to help users find the best plans, restaurants, activities, museums, etc. Your goal is to maintain a natural conversation with the user to obtain information about what they are looking for. I will provide you with the content of the conversation, the user's first message and the user's last message. You must create a new "wizard" and continue the conversation with the user unless the user does not want to continue. - The message should be as friendly as possible. - You have to ask questions related to the conversation. - Respond in 50 words or less.

3) **I send the output of the previous AI Task to another AI Task to create a prompt with all the info from the user:** I'm going to give you the recent conversation history with the user. I want you to analyze the history and summarize, in one sentence, what the user wants to search for. If the transcription includes references to areas of Malaga or municipalities, you must include them in the prompt. If no reference is made to a location, always use the city of Malaga by default.
4) **Send the request to the Cohere API:**

```javascript
const data = {
  message: bot.prompt,
  connectors: [{ id: 'web-search' }],
  stream: true,
  promptTruncation: 'OFF',
  model: 'command-r',
  citation_quality: 'accurate',
  preamble:
    'Set today as the system date: ' + bot.currentdate +
    '. You are a personal assistant specialized in content from the city of Malaga. You should only search for answers and results related to Malaga. Always use the Spanish language. You cannot search for information that is not from Malaga. If the user asks about something that is not related to the province of Malaga, tell them that you cannot answer. Do not show any results that are not from Malaga',
  temperature: 0,
};

await axios
  .post('https://api.cohere.ai/v1/chat', data, {
    headers: {
      'Content-Type': 'application/json',
      Authorization: 'Bearer XXXXXXXXXXXXXXXXX',
    },
  })
  .then((response) => {
    bot.respuestaIA = response.data;
  })
  .catch((error) => {
    console.error(error);
  });
```
5) **Parse the response.** The most difficult part for me was parsing the response from the API, because it returns an incredible amount of data. I use this prompt: I'm going to give you an LLM's answer to a user's question. When you receive a query, organize your response following this outline, adapting it according to the topic of interest (restaurants, activities, places of interest, etc.): Recommended: Simplified list of the best options or highlights. For each one, provide: - Name: Clear and concise. - Location: Specific, if applicable. - Unique Features: One or two sentences about what makes it special. - Information: Schedules, prices, brief recommendations. Other Options to Consider: - Name and Location: For quick identification. - Brief Description: A sentence about each option. - Practical Details: Key information summarized. Make sure your answer is direct, to the point, and easy to read, using bullet points or numbered lists to organize information. The goal is for the user to quickly get an overview and actionable details without having to filter through too much text.
6) **Show the info to the user** and send both the question from the user and the answer from Cohere to Airtable via Make.com, thanks to @fresh-fireman-491's implementation.

As my chatbot is for massive (B2C) use, I need to reduce costs. With this approach, each full interaction from the user costs approx. 0.1-0.2 cents. This approach is just for the testing phase, so as not to invest a lot of money. I have also developed another approach to feed my chatbot with information directly from people (restaurant recommendations, places to see, etc.), using a Knowledge Base. That will provide users with real information from the community. Only in case I don't find accurate info in my own Knowledge Base will I use the API approach. Hope this can be useful for building better chatbots. Thanks for all the support!!
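Since step 5 above (parsing the huge Cohere response) was the hard part, here is a small sketch of pulling just the reply text and cited sources out of a non-streaming `/v1/chat` response body. The field names (`text`, `documents`) follow Cohere's v1 chat API as I understand it; with `stream: true` as in the request above you would get events instead, so double-check against the current Cohere docs:

```javascript
// Extract the reply text and any cited source URLs from a (non-streaming)
// Cohere /v1/chat response body. Field names are based on Cohere's v1 chat
// API and should be verified against their documentation.
function parseCohereChat(body) {
  const text = body.text || '';
  const sources = (body.documents || []).map((d) => d.url).filter(Boolean);
  return { text, sources };
}
```

This keeps the AI-Task reformatting prompt focused on the answer text alone instead of the whole raw payload.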
q
🦸🏻‍♂️ 💎 🫡
Awesome use case and solution @agreeable-accountant-7248 !!
f
Great messages, both of you! I was thinking of the SERP API when I said "but there is some 3rd-party software that you can use"; glad you remembered the name of it, @quick-musician-29561. Thanks a lot for sharing all of that, @agreeable-accountant-7248, but what is the main difference between using the Cohere API and using Search the Web in Botpress?
a
Hi Decay. At the beginning I only used Knowledge Bases and Search the Web in Botpress. I started to explore different options for two main reasons:
- Price
- Quality of data

As I mentioned before, my chatbot is built for massive use, so I need to minimize the cost of each user question. Even using GPT-3.5 for the AI Task, using knowledge bases cost me about 0.20 per question (now 0.01 with the free API key from Cohere). There is also a difference in the quality of the data obtained from Search the Web versus Cohere. This is the output for the same user question:

- User question: The user is looking for a grilled meat restaurant in Malaga. They want it to be located in the center of Malaga and don't want to spend more than €50 per person. No preference on the type of meat or the atmosphere of the restaurant.
- Botpress: With that query, Raw Input does not recognise a question, and the KB is skipped: Skipping KB: turn message has no question {"scope":"agent-hook","agentId":"KnowledgeAgent","hookName":"conversation_turn_started"}
- Cohere: Here are some of the best restaurants to get meat grilled to perfection in the centre of Málaga.
  - Pampa Grill Málaga: Argentine restaurant specialising in a variety of grilled meats including chicken, tender steaks and empanadas. Located in the city centre at Calle Sánchez Pastor, 10. Price: €€ - $$$
  - Asador Iñaki: Located near María Zambrano Station, this restaurant offers a wide variety of meats including beef, cows and bulls of different ages, maturation, breeds and origins. Their menu also caters to those who want seafood. Price: €€ - $$$
  - El Farolito: This restaurant offers a range of grilled meats including steaks, and potatoes. It is located at Calle Beatas, 14. Price: €€ - $$$
  - Papulinos: Located in three areas in Málaga (Calle Malasaña, 42, Calle Don Cristian, 41 and Calle Correo de Andalucía), this restaurant serves excellent grilled meat. The restaurant also has a large terrace. Price: Around €€
  - Sabor a Fuego: Serves a variety of delicious grilled dishes, using fresh ingredients. There's something on the menu for every taste. According to a review on their website, their bife de chorizo is "a succulent manjar". Price: €€
  - Asador Ovidio: Family-run restaurant located in the city centre that serves a wide variety of grilled meats including retinta, Black Angus and Galician blonde beef. They also have a wine list with over 100 different wines. Price: €€ - $$$
  - Imperio Grill: This Brazilian steakhouse serves a range of hearty grilled steaks and kebabs, along with sides including homemade chips. Price: €€ - $$$
f
Yea okay, I can see that. Sitting down to explore it now
g
The Google Places API and Yelp would be handy in this use case: use AI to fill the variables needed to create the inputs and to format the output. https://developers.google.com/maps/documentation/places/web-service https://docs.developer.yelp.com/docs/getting-started Yelp has a free public API, and Google offers pay-as-you-go pricing with a free $200/month credit.
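For the Places route, an AI Task could fill in the search query and a small helper could build the request URL. A sketch against the classic Places Text Search web service (the endpoint and `query`/`key` parameter names come from Google's Places Web Service docs; the key is a placeholder):

```javascript
// Build a Google Places Text Search request URL.
// Endpoint and parameters follow the classic Places Web Service;
// `apiKey` is a placeholder for your own key.
function placesTextSearchUrl(query, apiKey) {
  const params = new URLSearchParams({ query, key: apiKey });
  return `https://maps.googleapis.com/maps/api/place/textsearch/json?${params}`;
}
```

The returned URL can then be fetched with `axios.get(...)` inside an Execute Code card, and the results formatted for the user by another AI Task.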
Circling back to Groq, I noticed a tweet saying it’s available on FlowiseAI 😱 too! Speedy custom RAG
a
Is FlowiseAI consumable via API?
g
Yes! It's a tool I know @ Goose and @ aisimp use in their Botpress workflows (I'm not gonna fully ping them right now, but those are their handles in here if you have deeper questions; Goose checks in regularly). Check out tutorials using FlowiseAI 📺

https://youtu.be/7p3foNlykJc?feature=shared&t=715

c
@fresh-fireman-491 How would we let Groq access our Knowledge Base? Great demo btw, getting excited about using Groq in production 🚀
f
We would have to wait for Botpress to implement this. You can set it up to use the chunks from the KB, but it would be twice the cost. You can also set it up outside of Botpress.

https://www.youtube.com/watch?v=QE-JoCg98iU
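The "use the chunks from the KB" route mentioned above could look roughly like this: a Query Knowledge Bases step retrieves passages, and an Execute Code card stuffs them into a Groq prompt. A sketch only; the variable names (`workflow.kbChunks`) and the system-prompt wording are my assumptions, not a Botpress API:

```javascript
// Assemble an OpenAI-style messages array from pre-retrieved KB chunks.
// `chunks` is assumed to be an array of plain-text passages stored by an
// earlier retrieval step (e.g. in a variable like workflow.kbChunks).
function buildRagMessages(chunks, question) {
  const context = chunks.map((c, i) => `[${i + 1}] ${c}`).join('\n\n');
  return [
    {
      role: 'system',
      content:
        'Answer using only the context below. If the answer is not in the context, say so.\n\n' +
        context,
    },
    { role: 'user', content: question },
  ];
}
```

Inside an Execute Code card this could then feed the same Groq call as the earlier snippets, e.g. `axios.post('https://api.groq.com/openai/v1/chat/completions', { messages: buildRagMessages(workflow.kbChunks, event.preview), model: 'mixtral-8x7b-32768' }, ...)`, which is where the "twice the cost" comes from: one spend for retrieval, one for generation.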

h
Does anyone have a video showing how to implement the Groq API in Botpress?
Oh wow! Watched this video and it's super cool! Can I use Pinecone as my knowledge base, feed it to the Groq API, and have that integrated with Botpress? How?
f
Probably yes, but I am not an expert in that field. Feel free to create a post in #1111009377525186570
h
@gentle-engine-13076 do you know how to use Pinecone for RAG together with the Groq API and Botpress? That would be the ultimate solution.
I made it work with Groq 😄 Thanks @fresh-fireman-491 @gentle-engine-13076 To make it ultimate, RAG + Groq would be a fantastic move!
f
Amazing, nice work!
g
I replied to you there. Idk if it's what you're hunting for, but I shared my ✌🏼cents.
I just saw that Groq has deprecated LLaMA2 and added both versions of LLaMA3 (70b + 8b), so I updated the bot file accordingly. ✌🏼🫶🏼 https://cdn.discordapp.com/attachments/1220643404833230898/1233599653077844038/Better_Call_Groq_API_-_2024_Apr_26.bpz?ex=662daeb4&is=662c5d34&hm=96676a14b63497d08002803a31df5b6c46b28b74528c3ee7879ee143fce5d5ec&
f
You are amazing🦸
a
Next step: use Llama 3 with Botpress's Query Knowledge Base for an in-house RAG 🤞
f
That gave me a good idea! I sent this to the Botpress team:
> Idea, which I think would be absolutely amazing! 💎
> Add another model strategy, which would be to use our own model. This would just return the chunks, and then we could handle them as we wish.
> This would be great as we could use any model, running anywhere, with our own prompt!
> Let me know what you think
a
Now that there are so many LLMs, it would be great to let users choose their own LLM inside Botpress. I have 4 different API integrations with Groq, Perplexity, Cohere and Claude 3, because each one offers me different things. Being able to choose which LLM to use against the Botpress knowledge base would be a killer feature in my opinion, as a lot of people are looking for a RAG solution with their own data but without the cost of OpenAI.
f
Would be amazing. When you say "integrations", do you mean custom-built integrations like the webchat and WhatsApp integrations, or do you mean using them in a flow?
a
I mean just the API call, not an integration like WhatsApp or Firebase. But now that Groq is free and Llama 3 is really fast, we could have one of the best RAG systems in Botpress, since you can also synchronize the knowledge base with external tools 😊
f
Alrighty. Would you prefer to keep doing the API calls as you are now, or to have an integration where you could easily call all APIs that just use token-based auth?
a
I think it could be a template for different LLMs, since most of them use the same parameters in their API calls. For me the most important part is chat history, as some of them need a particular schema: system-user-assistant-user, etc. But if I could choose, I would prefer the ability to query the knowledge base with a different LLM than OpenAI; tbh, API calls are not such a complex thing 😊
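That template idea can be sketched as a small provider lookup for APIs that speak the OpenAI chat schema. The Groq URL and model come from earlier in this thread; the Perplexity entry (URL and the `YOUR_MODEL` placeholder) is an assumption to verify against their docs, and Cohere is left out because its request schema differs:

```javascript
// Minimal provider template for chat APIs using the OpenAI message schema.
// groq entry is from this thread; the perplexity entry is an assumption.
const PROVIDERS = {
  groq: { url: 'https://api.groq.com/openai/v1/chat/completions', model: 'mixtral-8x7b-32768' },
  perplexity: { url: 'https://api.perplexity.ai/chat/completions', model: 'YOUR_MODEL' },
};

// Build everything an axios.post call needs for a given provider.
function buildChatRequest(provider, messages, apiKey) {
  const p = PROVIDERS[provider];
  if (!p) throw new Error(`Unknown provider: ${provider}`);
  return {
    url: p.url,
    headers: { Authorization: `Bearer ${apiKey}`, 'Content-Type': 'application/json' },
    data: { model: p.model, messages },
  };
}
```

Usage would then be a one-liner per provider, e.g. `const { url, headers, data } = buildChatRequest('groq', messages, env.GROQ_API_KEY); await axios.post(url, data, { headers });`, which keeps the flow logic identical no matter which LLM is selected.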
f
Alrighty, thank you!
a
Thank you for your work and you time 😊
f
I read this as you hadn't implemented the chat history in the API calls 🙂 I hope I was right, but let me know 🙂 In case I was, I did some testing using the transcript from the Summary agent to add history to the API call with that schema. This should work for most LLMs, and if not, it shouldn't be too hard to change. It's made for OpenAI, which is what I used to test, but it should also work with Claude etc. Just sharing the bot here; I will do a tutorial on it after I have gotten something to eat 🍜
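One way to turn a plain-text transcript into the system/user/assistant schema mentioned above can be sketched like this. The `user:` / `bot:` line prefixes are my assumption about the transcript format, not what the Summary agent is guaranteed to produce, so adjust them to match your actual transcript:

```javascript
// Convert a plain-text transcript (one turn per line, "user: ..." / "bot: ...")
// into an OpenAI-style messages array. The line prefixes are an assumption;
// change them to match the transcript your bot actually stores.
function transcriptToMessages(transcript, systemPrompt) {
  const messages = [{ role: 'system', content: systemPrompt }];
  for (const line of transcript.split('\n')) {
    const trimmed = line.trim();
    if (trimmed.toLowerCase().startsWith('user:')) {
      messages.push({ role: 'user', content: trimmed.slice(5).trim() });
    } else if (trimmed.toLowerCase().startsWith('bot:')) {
      messages.push({ role: 'assistant', content: trimmed.slice(4).trim() });
    }
  }
  return messages;
}
```

The resulting array drops straight into the `messages` field of the Groq/OpenAI-style requests shown earlier in the thread, so each call carries the conversation history.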