From what I read the Yoast solution is focused on the use of the XML sitemap.
To start with, we do not use an XML sitemap for the tags. So using the Yoast SEO plugin to prevent the use of a...
Any other suggestions then?
What if I use the .htaccess to do a rewrite in order to redirect the crawlers to the robots.txt in the blog.pianetadonna.it/mysite/ directory. Do you think that could work?
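For what it is worth, the rewrite I have in mind would look something like this (just a sketch; whether the platform actually processes a root-level .htaccess is an assumption on my part, and /mysite/ is only a placeholder):

```apache
# Sketch: send requests for the root robots.txt to the copy kept
# in the site's subdirectory. Needs mod_rewrite enabled and an
# .htaccess file that the platform actually honours.
RewriteEngine On
RewriteRule ^robots\.txt$ /mysite/robots.txt [R=301,L]
```

As far as I know the major crawlers do follow a redirect when fetching robots.txt, but that is exactly the part I would want confirmed.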
That only works if you do have tags but do not want them to be indexed. It does not work if you no longer have any tags and do not want the crawlers to keep looking for those non-existing tags.
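For the removed tags, the alternative I can think of is answering with "410 Gone" so crawlers drop the URLs instead of merely being disallowed from them. An untested sketch, assuming the usual WordPress tag permalink structure under the /msp/ subdirectory and, again, that .htaccess rules are honoured:

```apache
# Sketch: tell crawlers that any old tag URL is permanently gone,
# rather than just blocking it in robots.txt.
RedirectMatch 410 ^/msp/tag/.*$
```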
Among other things I want to prevent the crawling of feeds and searches. Also we recently decided to remove all tags from our pages. Crawlers tend to have long memories so they keep on trying to...
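The rules I have in mind look roughly like this (a sketch only; the paths assume the standard WordPress feed, search, and tag URL patterns under the /msp/ subdirectory, not anything confirmed for this platform):

```text
# Sketch: the * wildcard in paths is honoured by Google and Bing
# but is not part of the original robots.txt standard.
User-agent: *
Disallow: /msp/feed/
Disallow: /msp/*/feed/
Disallow: /msp/?s=
Disallow: /msp/tag/
```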
That leaves us with my questions regarding the working (or rather not working) of the robots.txt on the pianetadonna platform.
We established that the sites on the pianetadonna platform cannot use...
I am officially impressed that you managed to come up with such an obscure message :).
While it does not solve the problem, it at least sets my mind at ease.
Thanx,
Gert
Hi,
The problem I am trying to solve has to do with images and attachments. For now I would like to concentrate on the images. Like I said, in the Search Console I have thousands of crawl anomalies with the...
I hope this one will do the trick:
http://tinyurl.com/uznl9mj
The message from the search console is related to the link I provided earlier: https://blog.pianetadonna.it/msp/zuppa-di-cavolo-nero-ceci/finale-tavola-2/
Regarding the general use of the...
p.s. Do you think I could ask questions on the Italian forum in English? My Italian is limited to "mama mia" and some poetic swearing so that would hamper the Italian conversation quite a bit.
Really? I thought that the robots.txt tells a search engine's indexing crawler which URLs on a site it is allowed to crawl and which ones it is not. With a working robots.txt you could block crawlers...
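That expected behaviour is easy to check locally with Python's urllib.robotparser. A minimal sketch, using made-up rules rather than the real pianetadonna configuration:

```python
from urllib.robotparser import RobotFileParser

# Sketch: parse a hypothetical robots.txt and ask which URLs a
# crawler obeying it would be allowed to fetch. The Disallow
# rule below is illustrative only.
rp = RobotFileParser()
rp.parse("""
User-agent: *
Disallow: /msp/tag/
""".splitlines())

# A tag URL should be blocked, a normal post URL allowed.
print(rp.can_fetch("*", "https://blog.pianetadonna.it/msp/tag/zuppa/"))  # → False
print(rp.can_fetch("*", "https://blog.pianetadonna.it/msp/zuppa-di-cavolo-nero-ceci/"))  # → True
```

If a crawler that claims to respect robots.txt still requests the blocked URLs, the problem is on the platform side, not in the rules.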
So this statement seems to be correct.
Which leaves the question:
is this any better?
http://tinyurl.com/srklav5
Do I understand correctly that you set up the pianetadonna platform in such a way that the websites running on this platform are positioned as a subdirectory instead of as a root directory? And that...
https://blog.pianetadonna.it/msp/zuppa-di-cavolo-nero-ceci/finale-tavola-2/
Hi,
I would like some information about how the robots.txt file works on the pianetadonna platform. There are two things I notice that make me wonder.
The first thing is that the entries in my...