Over 40 pages have been removed from the index, and this page has been selected as the Google-preferred canonical.
-
The page Google has selected as the canonical is:
https://studyplaces.com/about-us/
The pages affected by this include:
https://studyplaces.com/50-best-college-party-songs-of-all-time-and-why-we-love-them/
https://studyplaces.com/15-best-minors-for-business-majors/
As you can see, the content on these pages is totally unrelated to the content on the about-us page. Any ideas why this is happening and how to resolve it?
-
It seems like Google only has a handful of distinct pages indexed at the moment, whereas Bing has about 10x as many. So that seems to indicate something is wrong specifically on the Google side.
I'd start by checking your Search Console. Are there any errors? If you use the URL Inspection tool on the URL in question (studyplaces.com/15-best-minors-for-business-majors/), does it tell you why it has been canonicalised? What happens if you view the page as Google saw it? Is there any chance that you are blocking Googlebot or cloaking in any way? Have you had any website outages or downtime?
As others noted, your about page is now missing - did you do that deliberately to see if it resolved this issue?
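One cheap way to rule out a robots.txt problem is to run the live file through Python's robotparser and ask it what Googlebot may fetch. A sketch (the robots.txt content below is invented to show the failure mode, not taken from your site):

```python
from urllib import robotparser

# Hypothetical robots.txt content -- replace with the live file from
# https://studyplaces.com/robots.txt when checking the real site.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/

User-agent: Googlebot
Disallow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

url = "https://studyplaces.com/15-best-minors-for-business-majors/"
print(rp.can_fetch("Googlebot", url))  # the Googlebot-specific Disallow blocks it
print(rp.can_fetch("Bingbot", url))    # other crawlers fall back to the * group
```

A Googlebot-only block like this would explain Bing indexing the site normally while Google's index collapses.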
-
My Friend,
I don't have access to the backend or to Search Console, so it is hard to make a precise determination of what's causing your issue.
I think this article might be helpful to you. https://www.signalogic.com/index.pl?page=google_canonical_page_algorithm_problems
Also, the pages Google deindexed have mobile-usability issues, as you can see here:
https://search.google.com/test/mobile-friendly?utm_source=gws&utm_medium=onebox&utm_campaign=suit
And I see in Bing's cache that some pages are blocking the content with a popup.
The about page now returns a 404 (maybe you did that on purpose to get the other pages indexed again?).
I am sure you will figure it out. When you do, please come back here to share your experience.
I wish you all the best in resolving this soon!
Regards,
Joseph Green
-
Your About Us link is broken. That needs to be fixed first.
Related Questions
-
Which pages should I index or have in my XML sitemap?
Hi there, my website is ConcertHotels.com - a site which helps users find hotels close to concert venues. I have a hotel listing page for every concert venue on my site - about 12,000 of them, I think (and the same for nearby restaurants), e.g. https://www.concerthotels.com/venue-hotels/madison-square-garden-hotels/304484. Each of these pages lists the hotels near that concert venue. Users clicking on an individual hotel are brought through to a hotel (product) page, e.g. https://www.concerthotels.com/hotel/the-new-yorker-a-wyndham-hotel/136818. I made a decision years ago to noindex all of the /hotel/ pages, since they don't have a huge amount of unique content and aren't the pages I'd like my users to land on. The primary pages on my site are the /venue-hotels/ listing pages. I have similar pages for nearby restaurants, so there are approximately 12,000 venue-restaurants pages, again, one listing page for each concert venue.
However, while all of these pages are potentially money-earners, in reality the vast majority of subsequent hotel bookings have come from a fraction of the 12,000 venues. I would say 2,000 venues are key money-earning pages, a further 6,000 have generated income at a low level, and 4,000 are yet to generate income.
I have a few related questions:
1. Although there is potential for any of these pages to generate revenue, should I be brutal and simply delete a venue if it hasn't generated revenue within a time period, and just accept that, while it "could" be useful, it hasn't proven to be and isn't worth the link equity? Or should I noindex these poorly performing pages?
2. Should all 12,000 pages be listed in my XML sitemap? Or only the ones that are generating revenue, or perhaps just the ones that have generated significant revenue in the past and have proved to be most important to my business?
Thanks
Mike
Technical SEO | | mjk260 -
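On the XML sitemap question, a minimal sketch of the "only list the money pages" approach is to generate the sitemap from venues above a revenue cutoff. The venue data and the threshold below are invented for illustration; in practice they would come from the booking database:

```python
from xml.etree import ElementTree as ET

# Hypothetical venue data: (listing-page URL, revenue generated to date).
venues = [
    ("https://www.concerthotels.com/venue-hotels/madison-square-garden-hotels/304484", 5400.0),
    ("https://www.concerthotels.com/venue-hotels/small-town-hall-hotels/999999", 0.0),
]

MIN_REVENUE = 1.0  # only include pages that have earned something

# Build a sitemaps.org-format <urlset> containing just the qualifying pages.
urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for url, revenue in venues:
    if revenue >= MIN_REVENUE:
        ET.SubElement(ET.SubElement(urlset, "url"), "loc").text = url

xml = ET.tostring(urlset, encoding="unicode")
print(xml)
```

Regenerating the file on a schedule means pages automatically enter the sitemap once they start earning, without being deleted from the site.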
Large Drop in Indexed Pages But Increase in Traffic
We run a directory site and noticed about a week ago that Google Webmaster Tools was reporting a huge drop in indexed pages (from around 150,000 down to 30,000). In the same period, however, our traffic has increased. Has anyone seen this before, or have any ideas on why this could happen? I have searched for technical errors, but nothing has changed on our site or in our content.
Technical SEO | | sa_787040 -
What should I do to index images in Google Webmaster Tools?
My website is onlineplants.com.au. It's a shopping cart website. I have nearly 1,200 images, but none of them are indexed in Google Webmaster Tools. What should I do? Thanks
Technical SEO | | Verve-Innovation1 -
Web Page Dropped Out of Google?
One of our web pages seems to have completely dropped out of Google after featuring on page 1 for a number of years. It can't be a site-wide issue, as all other web pages are performing as normal. The page is http://www.contractormoney.com/income-protection/ and the key phrase it was performing well for was 'contractor income protection'. Any ideas?
Technical SEO | | Pete40 -
Does Google Parse the Anchor Text While Indexing?
Hey Moz fans, I'm here to ask a bit of a technical and open-minded question. In Google's paper http://infolab.stanford.edu/~backrub/google.html they say they parse the page into "hits", which are basically word occurrences. But I want to know whether they do the same thing when building the anchor text database. I mean, do they parse the anchor text, or keep it as it is? For example, let's say my anchor text is "real car games". When they index my link with its anchor text, do they parse the anchor text into distinct hits - "real", "car", "games" - or do they just use it as it is, as "real car games"?
Technical SEO | | atakala -
Duplicate Page Content / Rel Canonical
Hi, The diagnostics show me that I have 590 instances of Duplicate Page Content, but when it shows the Rel Canonical I have over 1,000, so does that mean I have no Duplicate Page Content problem? Please help.
Technical SEO | | Joseph-Green-SEO -
De-indexing millions of pages - would this work?
Hi all, We run an e-commerce site with a catalogue of around 5 million products. Unfortunately, we have let Googlebot crawl and index tens of millions of search URLs, the majority of which are very thin on content or duplicates of other URLs. In short: we are in deep. Our bloated Google index is stopping our real content from ranking; Googlebot does not bother crawling our real content (product pages specifically) and hammers the life out of our servers. Since having Googlebot crawl and de-index tens of millions of old URLs would probably take years (?), my plan is this:
1. 301 redirect all old SERP URLs to a new SERP URL.
2. If the new URL should not be indexed, add a meta robots noindex tag on the new URL.
3. When it is evident that Google has indexed most "high quality" new URLs, disallow crawling of the old SERP URLs in robots.txt.
4. Then, directory-style, remove all old SERP URLs in the GWT URL Removal Tool.
This would be an example of an old URL:
www.site.com/cgi-bin/weirdapplicationname.cgi?word=bmw&what=1.2&how=2
This would be an example of a new URL:
www.site.com/search?q=bmw&category=cars&color=blue
I have two specific questions:
1. Would Google both de-index the old URL and not index the new URL after 301 redirecting the old URL to the new URL (which is noindexed), as described in point 2 above?
2. What risks are associated with removing tens of millions of URLs directory-style in the GWT URL Removal Tool? I have done this before, but then I removed "only" some 50,000 useless "add to cart" URLs. Google itself says that you should not remove duplicate/thin content this way and that using the tool this way "may cause problems for your site".
And yes, these tens of millions of SERP URLs are the result of a faceted navigation/search function let loose for all too long. And no, we cannot wait for Googlebot to crawl all these millions of URLs in order to discover the 301s. By then we would be out of business.
Best regards,
TalkInThePark
Technical SEO | | TalkInThePark -
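The 301-plus-noindex plan described above can be sketched as a plain routing function. The URL patterns are the poster's examples; the mapping rules (taking only the `word` parameter, noindexing every `/search` page) are illustrative assumptions, not a drop-in implementation:

```python
from urllib.parse import urlparse, parse_qs, urlencode

def handle(old_url):
    """Return (status, headers) for an incoming request, per the plan above."""
    parts = urlparse(old_url)
    if parts.path.startswith("/cgi-bin/"):
        # Step 1: 301 every legacy cgi-bin search URL to its new equivalent.
        q = parse_qs(parts.query)
        word = q.get("word", [""])[0]
        new_url = "https://www.site.com/search?" + urlencode({"q": word})
        return 301, {"Location": new_url}
    if parts.path == "/search":
        # Step 2: thin search results stay crawlable but carry a noindex signal.
        return 200, {"X-Robots-Tag": "noindex"}
    return 200, {}

status, headers = handle(
    "https://www.site.com/cgi-bin/weirdapplicationname.cgi?word=bmw&what=1.2&how=2"
)
print(status, headers["Location"])
```

One caveat worth noting: for step 3, the robots.txt disallow must not go live before Google has seen the redirects, since a blocked URL can stay indexed with whatever Google last saw.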
Having trouble removing homepage from Google
For various reasons my client wants their homepage removed from Google - not just the content of the page, but the page itself not indexed (yep, strange request, but we are mere service providers). Today I requested in Webmaster Tools that default.asp be removed. WMT says done, but the site's homepage is still listed. The page also has a noindex tag on it, but 24 hours and 18k Googlebot hits later it still remains. Anyone got any other suggestions to deindex just the homepage ASAP, please?
Technical SEO | | Grumpy_Carl
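For what it's worth, a quick sanity check is to confirm the noindex signal is actually present in what Googlebot receives, in one of its two standard forms: the X-Robots-Tag response header or the meta robots tag. A rough sketch (the helper below is hypothetical, not part of any tool):

```python
import re

def is_noindexed(headers, html):
    """Rough check for the two standard noindex signals (sketch only)."""
    if "noindex" in headers.get("X-Robots-Tag", "").lower():
        return True
    # Look for <meta name="robots" content="...noindex...">.
    meta = re.search(r'<meta[^>]+name=["\']robots["\'][^>]*>', html, re.I)
    return bool(meta and "noindex" in meta.group(0).lower())

print(is_noindexed({}, '<meta name="robots" content="noindex">'))  # True
```

Feed it the headers and body of default.asp as fetched with a Googlebot user-agent; if neither signal is present in that response (e.g. the tag is added client-side), that would explain the page lingering in the index.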