So I checked to see how Google explains it, and this is what I read:
Google was unable to crawl the URL due to a robots.txt restriction. This can happen for a number of reasons. For instance, your robots.txt file might prohibit the Googlebot entirely; it might prohibit access to the directory in which this URL is located; or it might prohibit access to the URL specifically. Often, this is not an error. You may have specifically set up a robots.txt file to prevent us from crawling this URL. If that is the case, there's no need to fix this; we will continue to respect robots.txt for this file.

If a URL redirects to a URL that is blocked by a robots.txt file, the first URL will be reported as being blocked by robots.txt (even if the URL is listed as Allowed in the robots.txt analysis tool).
What exactly does this all mean?
Whatever my options are for learning this, I am determined to figure it out today somehow.
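To get started, here is a minimal sketch of how I understand the three cases Google describes, using Python's standard urllib.robotparser. The user agents, paths, and example.com URLs here are hypothetical, made up purely for illustration, not taken from my own site:

```python
import urllib.robotparser

# Hypothetical robots.txt covering the three cases Google describes:
# blocking one specific URL, blocking a whole directory,
# and blocking a crawler from the entire site.
robots_txt = """\
User-agent: Googlebot
Disallow: /private/
Disallow: /secret-page.html

User-agent: BadBot
Disallow: /
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(robots_txt.splitlines())

urls = [
    "http://example.com/public/page.html",     # not mentioned -> crawlable
    "http://example.com/private/report.html",  # inside a disallowed directory
    "http://example.com/secret-page.html",     # disallowed specifically
]

for url in urls:
    allowed = parser.can_fetch("Googlebot", url)
    print(url, "->", "allowed" if allowed else "blocked by robots.txt")

# BadBot is disallowed from "/", so it cannot fetch anything on the site.
print(parser.can_fetch("BadBot", "http://example.com/public/page.html"))  # False
```

If that sketch is right, the last part of Google's note just means that when some page redirects to one of these blocked URLs, the crawl report flags the original page as blocked, even though its own path never appears in robots.txt.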