Google's John Mueller answered a question about why Google reports pages that are blocked from crawling by robots.txt, and why it's safe to ignore the related Search Console reports about those crawls.

Bot Traffic To Query Parameter URLs

The person asking the question documented that bots were creating links to non-existent query parameter URLs (?q=xyz) pointing to pages that have noindex meta tags and are also blocked in robots.txt. What prompted the question is that Google is crawling the links to those pages, getting blocked by robots.txt (without seeing a noindex robots meta tag), then getting reported in Google Search Console as "Indexed, though blocked by robots.txt."

The person asked the following question:

"But here's the big question: why would Google index pages when they can't even see the content? What's the advantage in that?"

Google's John Mueller confirmed that if they can't crawl the page, they can't see the noindex meta tag. He also made an interesting mention of the site: search operator, advising to ignore the results because the "average" users won't see those results.

He wrote:

"Yes, you're right: if we can't crawl the page, we can't see the noindex. That said, if we can't crawl the pages, then there's not a lot for us to index. So while you might see some of those pages with a targeted site:-query, the average user won't see them, so I wouldn't fuss over it. Noindex is also fine (without robots.txt disallow), it just means the URLs will end up being crawled (and end up in the Search Console report for crawled/not indexed; neither of these statuses causes issues to the rest of the site). The important part is that you don't make them crawlable + indexable."

Takeaways:

1. Mueller's answer confirms the limitations of using the site: advanced search operator for diagnostic purposes. One of those reasons is that it's not connected to the regular search index; it's a separate thing entirely.

Google's John Mueller discussed the site search operator in 2021:

"The short answer is that a site: query is not meant to be complete, nor used for diagnostics purposes.

A site query is a specific kind of search that limits the results to a certain website. It's basically just the word site, a colon, and then the website's domain.

This query limits the results to a specific website. It isn't meant to be a comprehensive collection of all the pages from that website."

2. A noindex tag without a robots.txt disallow is fine for these kinds of situations, where a bot is linking to non-existent pages that are getting discovered by Googlebot.

3. URLs with the noindex tag will generate a "crawled/not indexed" entry in Search Console, and those won't have a negative effect on the rest of the website.

Read the question and answer on LinkedIn:

Why would Google index pages when they can't even see the content?

Featured Image by Shutterstock/Krakenimages.com
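The mechanism behind Mueller's answer can be illustrated with Python's standard-library robots.txt parser. This is only a sketch of the situation described above; the `/search` path and `example.com` domain are hypothetical stand-ins, not details from the actual thread. The point it demonstrates: a crawler that honors the disallow rule never fetches the page, so any noindex meta tag on that page is never read.

```python
import urllib.robotparser

# Hypothetical robots.txt for example.com that blocks the query-parameter
# pages (assumption: the blocked URLs live under a /search path).
robots_txt = """User-agent: *
Disallow: /search
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# A polite crawler checks robots.txt before fetching. The disallowed URL
# is never requested, so the <meta name="robots" content="noindex"> on
# that page is never seen by the crawler.
print(rp.can_fetch("Googlebot", "https://example.com/search?q=xyz"))  # False

# Dropping the disallow (per Mueller's suggestion) lets the page be
# crawled, the noindex be read, and the URL settle into the harmless
# "crawled/not indexed" status.
print(rp.can_fetch("Googlebot", "https://example.com/other-page"))  # True
```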