I would like some information about the working of the robots.txt file on the pianetadonna platform. There are two things I have noticed that make me wonder.
The first thing is that the entries in my robots.txt file do not seem to work all that well. The entry "Disallow: /search*" should prevent Googlebot from crawling (and thereby indexing) search URLs, but according to Google Search Console the Googlebot still tries to index these search URLs. Another, similar example would be the entry "Disallow: /*feed*", which should prevent the indexing of all feed URLs.
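For context, the relevant part of my robots.txt looks roughly like this (a reconstruction; only the two Disallow lines are quoted verbatim from my file, and the User-agent line is my assumption of how such a file is usually structured):

```
User-agent: *
Disallow: /search*
Disallow: /*feed*
```

As far as I understand, the `*` wildcard is a Google extension to the original robots.txt specification, which only did literal prefix matching. So I am also unsure whether the trailing `*` in `/search*` is needed at all, or whether a plain prefix rule like `Disallow: /search` would behave the same for Googlebot.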
I wonder if this could be due to a syntax error in my entries. Alternatively, my question would be whether the robots.txt file is working properly at all.
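To sanity-check the prefix form of the rules locally, I tried Python's standard `urllib.robotparser`. This is only a sketch: `example.com` stands in for the real domain, and since `robotparser` implements the original prefix-matching rules rather than Google's `*` wildcard extension, I tested a plain `Disallow: /search` instead of the wildcard entries from my file:

```python
from urllib.robotparser import RobotFileParser

# Simplified rules: urllib.robotparser only does literal prefix
# matching, so the plain prefix "Disallow: /search" is used here
# instead of the wildcard form "Disallow: /search*".
rules = [
    "User-agent: *",
    "Disallow: /search",
]

rp = RobotFileParser()
rp.parse(rules)

# example.com stands in for the real site.
print(rp.can_fetch("Googlebot", "https://example.com/search?q=test"))  # False: blocked
print(rp.can_fetch("Googlebot", "https://example.com/about"))          # True: allowed
```

With this prefix rule the search URL is reported as blocked, so at least the basic prefix syntax seems sound; what this cannot tell me is how Googlebot itself interprets the wildcard entries.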
The second thing I noted is that I get a lot (thousands) of URL exclusions reported by Google Search Console. The reason given for these exclusions is "Blocked by robots.txt". But when I try to inquire further, Google Search Console states that I do not have a robots.txt, even though I do have one. Moreover, my robots.txt does not exclude any pictures.
I wonder if this could be an error in the Google Search Console reporting. Meaning, maybe the indexing of the pictures is blocked by something other than the robots.txt, while the Search Console reports it as a robots.txt block anyway. Alternatively, I wonder whether there is a robots.txt file at a higher level on the pianetadonna platform that blocks the indexing of pictures.
I hope you can help me out.