Botpress crawler blocked by robots.txt
# 🤝help
p
Hey, does anyone know which crawler Botpress is using? I am encountering a problem where Botpress is not indexing the website correctly, and I assume the crawler is blocked by the website's robots.txt. This is the robots.txt: https://www.contiss.de/robots.txt Is the crawler indeed blocked, or am I on the wrong path?
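For anyone who wants to reproduce the check: here is a minimal Python sketch using the standard library's urllib.robotparser to test whether a given user agent may fetch the page according to that robots.txt. The user-agent strings in the list are only placeholder guesses, since I don't know what Botpress's crawler actually identifies as.

```python
# Minimal sketch: check which candidate user agents are allowed to fetch a page
# according to the site's robots.txt. The agent names below are hypothetical
# placeholders -- the actual user agent of the Botpress crawler is unknown to me.
from urllib.robotparser import RobotFileParser

ROBOTS_URL = "https://www.contiss.de/robots.txt"
PAGE_URL = "https://www.contiss.de/"

# Hypothetical candidate user agents to test (not confirmed names)
CANDIDATES = ["*", "Botpress", "GPTBot"]

parser = RobotFileParser()
parser.set_url(ROBOTS_URL)
parser.read()  # fetches and parses the robots.txt

for agent in CANDIDATES:
    allowed = parser.can_fetch(agent, PAGE_URL)
    print(f"{agent!r}: {'allowed' if allowed else 'blocked'} for {PAGE_URL}")
```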
w
This is similar to an issue I encountered recently: https://discord.com/channels/1108396290624213082/1232079387260747806
p
Thanks for the answer! My issue is more that the website is blocking Botpress from reading its content. I described it briefly in this post: https://discord.com/channels/1108396290624213082/1235607223095267430 but I think the root cause is the robots.txt. Since I am in contact with the website's developer, I need to know which of the blocked agents in https://www.contiss.de/robots.txt should be unblocked. I would therefore be very grateful if anyone knows the name of Botpress's web scraper.
@bumpy-butcher-41910 do you know which scraper Botpress is using? Sorry for pinging you directly, but maybe you know, since you were involved in the discussion https://discord.com/channels/1108396290624213082/1232079387260747806 🙂