Google Not Indexing XML Sitemap Images
-
Hi Mozzers,
We are having an issue with our XML sitemap images not being indexed.
The site has over 39,000 pages and 17,500 images submitted in GWT. If you take a look at the attached screenshot, 'GWT Images - Not Indexed', you can see that the majority of the pages are being indexed - but none of the images are.
The first thing you should know about the images is that they are hosted on a content delivery network (CDN), rather than on the site itself. However, Google advice suggests hosting on a CDN is fine - see second screenshot, 'Google CDN Advice'. That advice says to either (i) ensure the hosting site is verified in GWT or (ii) submit in robots.txt. As we can't verify the hosting site in GWT, we had opted to submit via robots.txt.
There are 3 sitemap indexes: 1) http://www.greenplantswap.co.uk/sitemap_index.xml, 2) http://www.greenplantswap.co.uk/sitemap/plant_genera/listings.xml and 3) http://www.greenplantswap.co.uk/sitemap/plant_genera/plants.xml.
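For reference, submitting via robots.txt simply means our robots.txt points Google at those sitemap indexes - roughly along these lines (an illustrative sketch, not a verbatim copy of the live file):

```
# www.greenplantswap.co.uk/robots.txt (illustrative sketch)
User-agent: *
Disallow:

Sitemap: http://www.greenplantswap.co.uk/sitemap_index.xml
Sitemap: http://www.greenplantswap.co.uk/sitemap/plant_genera/listings.xml
Sitemap: http://www.greenplantswap.co.uk/sitemap/plant_genera/plants.xml
```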
Each sitemap index is split up into often hundreds or thousands of smaller XML sitemaps. This is necessary due to the size of the site and how we have decided to pull URLs in. Essentially, if we did it another way, it may have involved some of the sitemaps being massive and thus taking upwards of a minute to load.
To give you an idea of what is being submitted to Google in one of the sitemaps, please see view-source:http://www.greenplantswap.co.uk/sitemap/plant_genera/4/listings.xml?page=1.
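In outline, what gets submitted is a sitemap index pointing at many small child sitemaps, each of which lists page URLs with Google's image sitemap extension. The snippet below is an illustrative reconstruction - the page <loc> is a placeholder, and the image URL is one of ours quoted later in this thread - not a verbatim copy of the live files:

```xml
<!-- Sitemap index (illustrative) -->
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>http://www.greenplantswap.co.uk/sitemap/plant_genera/4/listings.xml?page=1</loc>
  </sitemap>
  <!-- ...hundreds more child sitemaps... -->
</sitemapindex>

<!-- Entry in a child sitemap, using the Google image extension (page URL is a placeholder) -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <loc>http://www.greenplantswap.co.uk/plants/example-listing</loc>
    <image:image>
      <image:loc>http://res.cloudinary.com/greenplantswap/image/upload/c_crop,g_north,h_0.9,w_1.0/c_fill,d_no_image_icon-720x720.jpg,g_center,h_900,q_80,w_900/v1352575097/nprvu0z6ri227cgnpmqc.jpg</image:loc>
    </image:image>
  </url>
</urlset>
```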
Originally, the image URLs were served over SSL, so we reverted to non-SSL URLs as that was an easy change. But over a week later, that seems to have had no impact. The image URLs are ugly... but should this prevent them from being indexed?
The strange thing is that a very small number of images have been indexed - see http://goo.gl/P8GMn. I don't know if this is an anomaly or whether it suggests there is no issue with how the images have been set up - in which case, the problem may lie elsewhere.
Sorry for the long message but I would be extremely grateful for any insight into this. I have tried to offer as much information as I can, however please do let me know if this is not enough.
Thank you for taking the time to read and help.
Regards,
Mark
-
Hi Mark,
I'm just following the thread as I have a similar problem. Would you mind sharing your results from the tests?
Thanks,
Bogdan
-
Thanks Everett - that's exactly what I intend to do.
We will be testing two new sitemaps with 100 URLs each: 1) with just the duplicate file extension removed, and 2) with the entire cropping part of the URL removed, as suggested by Matt.
Will be interested to see whether just one or both of the sitemaps are successful. Will of course post the outcome here, for anyone who might have this problem in future.
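Not from the actual build - just a rough Python sketch of how the two test variants could be generated from one of our messy Cloudinary URLs (the example URL is the one quoted further down the thread; whether Cloudinary actually serves the fully stripped form is an assumption to verify first):

```python
import re

# One of the messy image URLs from the sitemap (quoted elsewhere in this thread).
messy = ("http://res.cloudinary.com/greenplantswap/image/upload/"
         "c_crop,g_north,h_0.9,w_1.0/c_fill,d_no_image_icon-720x720.jpg,"
         "g_center,h_900,q_80,w_900/v1352575097/nprvu0z6ri227cgnpmqc.jpg")

base = "http://res.cloudinary.com/greenplantswap/image/upload/"

# Variant 1: remove only the stray ".jpg" inside the transformation segment
# (the one followed by a comma), leaving the real file extension at the end.
variant_1 = re.sub(r"\.jpg(?=,)", "", messy)

# Variant 2: drop the cropping/transformation parts entirely and keep just the
# image file name, as Matt suggested.
variant_2 = base + messy.rsplit("/", 1)[-1]

print(variant_1)
print(variant_2)
```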
-
It isn't always that simple. Maybe commas don't present a problem on their own. Maybe double file extensions don't present a problem on their own. Maybe a CDN doesn't present a problem on its own. Maybe very long, complicated URLs don't present a problem on their own.
You have all of these. Together, in any combination, they could make indexation of your images a problem for Google.
Just test it out on a few. Get rid of the file extension. If that doesn't work, get rid of the comma. That is all you can do. Start with whatever is easiest for the developer to implement, and test it out on a few before rolling it out across all of your images.
-
Cheers for that mate - especially the useful Excel formula.
I am going to try a few things in isolation so that we can accurately say which element/s caused the issue.
Thanks again, mate.
-
Ignore the developer - just because something worked for one site doesn't mean it'll work for yours.
The easiest way to test this is to manually create a sitemap with 100 or so 'clean' image URLs. Just pull the messy ones into Excel and use the formula below to create a clean version (put a messy URL in A1 and the formula in B1).
Good luck mate.
=CONCATENATE("<image:image><image:loc>http://res.cloudinary.com/greenplantswap/image/upload/",RIGHT(A1,LEN(A1)-FIND("~",SUBSTITUTE(A1,"/","~",IF(LEN(TRIM(A1))=0,0,LEN(TRIM(A1))-LEN(SUBSTITUTE(A1,"/","")))))),"</image:loc></image:image>")
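To illustrate, with one of the messy URLs from this thread in A1, B1 should come out roughly like this:

```
A1: http://res.cloudinary.com/greenplantswap/image/upload/c_crop,g_north,h_0.9,w_1.0/c_fill,d_no_image_icon-720x720.jpg,g_center,h_900,q_80,w_900/v1352575097/nprvu0z6ri227cgnpmqc.jpg
B1: <image:image><image:loc>http://res.cloudinary.com/greenplantswap/image/upload/nprvu0z6ri227cgnpmqc.jpg</image:loc></image:image>
```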
-
Thanks for the responses guys, much appreciated.
In terms of the commas, that was something I put to the developer; however, he came back with examples where this has clearly not been an issue - e.g. apartable.com has commas in its URLs and uses the same CDN (Cloudinary).
That said, I agree with you that the double file extension could be the issue. I may have to wait until next week to find out, as the developer is working on another project, but I will post the outcome here once I know.
Thank you again for the help!
-
Hello Edlondon,
I think you're probably answering your own question here. Google typically doesn't have any problem indexing images served from a CDN. However, I've seen Google have problems with commas in the URL at times. Typically it happens when other elements in the URL are also troublesome, such as your double file extension.
Are you able to rename the files to get rid of the superfluous .jpg extension? If so, I'd recommend trying it out on a few dozen images. We could come up with a lot of hypotheses, but that would be the one I'd test first.
-
Hmmm, I'll step out here - I've never used cloudinary.com or even heard of them. I personally use NetDNA with pull zones (which means they load the image/CSS/JS from your origin and store a version on their servers), while handling cropping/resizing on my own end via PHP and then loading that image. Example: http://cdn.fulltraffic.net/blog/thumb/58x58/youtube-video-xQmQeKU25zg.jpg - try changing the 58x58 to another size and my server will handle the crop/resize while NetDNA serves it and stores it for future loads.
-
Found one of the sites with the same Cloudinary URLs with commas - apartable.com
See Google image results: https://www.google.co.uk/search?q=site:apartable.com&tbm=isch
Their images appear to be well indexed. One thing I have noticed, however, is that we often have .jpg twice in the image URL. E.g.:
- http://res.cloudinary.com/greenplantswap/image/upload/c_crop,g_north,h_0.9,w_1.0/c_fill,d_no_image_icon-720x720**.jpg**,g_center,h_900,q_80,w_900/v1352574983/oyfos82vwvmxdx91hxaw**.jpg**
- http://res.cloudinary.com/greenplantswap/image/upload/c_crop,g_north,h_0.9,w_1.0/c_fill,d_no_image_icon-720x720**.jpg**,g_center,h_900,q_80,w_900/v1352574989/s09cv3krfn7gbyvw3r2y**.jpg**
- http://res.cloudinary.com/greenplantswap/image/upload/c_crop,g_north,h_0.9,w_1.0/c_fill,d_no_image_icon-720x720**.jpg**,g_center,h_407,q_80,w_407/v1352575010/rl7cl4xi0timza1sgzxj**.jpg**
I wonder if that is confusing Google? Even so, it isn't consistent, as Google has indexed a few of our images with exactly the same kind of URL as those listed above.
-
Thought I had them on email but they must be within our fairly cumbersome Skype thread... let me have a dig through when I get a chance and I'll post them up here.
-
Hmmmm, okay... Could you post the examples they gave, and an example page where the images are located on the site?
-
Hi Matt,
Thought I should let you know that (i) the X-Robots-Tag was not set, so that's not the issue and (ii) the URLs, although ugly, are not the issue either. We had a couple of examples of websites with the same thing (I'm told the commas facilitate on-the-fly sizing and cropping) and their images were indexed fine.
So, back to the drawing board for me! Thank you very much for the suggestions, really do appreciate it.
Mark
-
Hmm interesting - we hadn't thought of the X-Robots-Tag http header. I'm going to fire that over to the developer now.
As for the URLs, they are awful! I am told that this is not a problem, but perhaps it is worth chasing up again, as other solutions have so far been unfruitful.
Thanks for taking the time to help, Matt - I'll let you know if that fixes it! Unfortunately it could be another week before I know, as the developer is currently working on another project so any changes may be early-mid next week.
Thanks again...
-
This is a bit of a long shot, but if the files have been uploaded using their API, it may be that the 'X-Robots-Tag' HTTP header is set to noindex...
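If it helps, a quick way to check is a header request with curl against one of the image URLs from the sitemap (the URL below is just the example quoted elsewhere in this thread):

```
curl -I "http://res.cloudinary.com/greenplantswap/image/upload/c_crop,g_north,h_0.9,w_1.0/c_fill,d_no_image_icon-720x720.jpg,g_center,h_900,q_80,w_900/v1352575097/nprvu0z6ri227cgnpmqc.jpg"

# Look for a response header such as:
#   X-Robots-Tag: noindex
```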
Also, those URLs don't look great with the commas in them. Have you tried doing a small subset that just has the image id (e.g. http://res.cloudinary.com/greenplantswap/image/upload/nprvu0z6ri227cgnpmqc.jpg)?
Matt
-
Hi Federico,
Thanks very much for taking the time to respond.
To answer your question, we are using http://cloudinary.com/. So, taking one of the examples from the XML sitemap I posted above, an example of an image URL is http://res.cloudinary.com/greenplantswap/image/upload/c_crop,g_north,h_0.9,w_1.0/c_fill,d_no_image_icon-720x720.jpg,g_center,h_900,q_80,w_900/v1352575097/nprvu0z6ri227cgnpmqc.jpg (what a lovely URL!).
I had a look at http://res.cloudinary.com/robots.txt and it seems that they are not blocking anything - the disallow instruction is commented out. I assume that is indeed the robots.txt I should be looking at?
Assuming it is, this does not appear to get to the bottom of why the images are not being indexed.
Any further assistance would be greatly appreciated - we have 17k unique images that could be driving traffic and this is a key way that people find our kind of website.
Thanks,
Mark
-
Within the robots.txt file on the CDN (which one are you using?), have you set it to allow Google to index the images?
Most CDNs I know allow you to block search engines via robots.txt to avoid bandwidth consumption.
If you are using NetDNA (MaxCDN) or the like, make sure your robots.txt file isn't disallowing crawlers.
We also use a CDN to deliver images and static files, and all of them are being indexed. We tested disallowing crawlers, but it caused a lot of warnings, so instead we now allow all of them to read and index content (a small price to pay to have your content indexed).
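For example, an "allow everything" robots.txt on the CDN host is as simple as this (a minimal sketch, not your actual file):

```
User-agent: *
Disallow:
# An empty Disallow means nothing is blocked.
```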
Hope that helps!