Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Category: Technical SEO

Discuss site health, structure, and other technical SEO strategies.


  • We're looking to implement structured data for our social profiles and logo, as referenced here: https://developers.google.com/search/docs/data-types/social-profile and https://developers.google.com/search/docs/data-types/logo Should we add the markup for these structured data types to multiple pages, the homepage only, or all indexable pages? TIA

    | Allie_Williams
    0
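
    For reference, a minimal sketch of what the Organization markup in question can look like as JSON-LD; the company name, URLs, and logo path below are hypothetical placeholders:

      <script type="application/ld+json">
      {
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": "Example Co",
        "url": "https://www.example.com",
        "logo": "https://www.example.com/images/logo.png",
        "sameAs": [
          "https://www.facebook.com/exampleco",
          "https://twitter.com/exampleco"
        ]
      }
      </script>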

  • Hi Moz community, I implemented schema markup for a company with multiple branches. We set up an Organization with multiple points of contact, a LocalBusiness tag for each branch, and tags for specific products/services - all through Google Tag Manager. I managed to fix the product markup with a small update to the code I found in a Google forum, but have been unable to revive our local business markup. The schema markup tags are active, but when I run the Google structured data testing tool it doesn't find any schema tags. We are seeing some of the tags show up in Search Console, but not all of them. Has anyone else had this problem and found a solution? Or do you have any recommendations on how to mark up an organization with multiple branches? Should we have one overall organization tag and a separate one for each branch, or is there another way of presenting each branch? Appreciate any insight!

    | Alexanders
    0
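
    As a sketch of one common pattern for the multi-branch question above: each branch gets its own LocalBusiness markup with a parentOrganization reference back to the umbrella Organization. All names, addresses, and URLs here are hypothetical:

      <script type="application/ld+json">
      {
        "@context": "https://schema.org",
        "@type": "LocalBusiness",
        "name": "Example Co - Springfield Branch",
        "telephone": "+1-555-0100",
        "address": {
          "@type": "PostalAddress",
          "streetAddress": "1 Main St",
          "addressLocality": "Springfield"
        },
        "parentOrganization": {
          "@type": "Organization",
          "name": "Example Co",
          "url": "https://www.example.com"
        }
      }
      </script>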

  • I moved my website from Wix to WordPress in May 2018. Since then, it's disappeared from Google searches. The site and pages are indexed, but no longer ranking. I've just started a Moz campaign, and most pages are being flagged as having "thin content" (50 words or less), when I know that there are 300+ words on most of the pages. Looking at the page source I find this bit of code: page contents Does this mean that Google is finding this and thinks that I have only two words ("page contents") on the page? Or is this code meant to grab the page contents from somewhere else in the code? I'm completely lost with this and would appreciate any insight.

    | Photowife
    1

  • This is in my robots.txt file; does anyone know what it is supposed to accomplish? It doesn't appear to be blocking URLs with question marks: Disallow: /?crawler=1
    Disallow: /?mobile=1 Thank you

    | AmandaBridge
    0
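
    Some context on why those rules may not block what is expected: robots.txt patterns match from the start of the URL path, so Disallow: /?crawler=1 only blocks URLs that begin exactly with /?crawler=1. Blocking every URL containing a question mark takes a wildcard, roughly like this sketch:

      User-agent: *
      # Matches only URLs beginning exactly with these strings:
      Disallow: /?crawler=1
      Disallow: /?mobile=1
      # Matches any URL containing a "?" (wildcard honored by Google and Bing):
      Disallow: /*?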

  • We have a blog on our site where the homepage and category pages have "noindex,follow" but the articles have "index,follow". Recently we have noticed that the article pages are no longer showing in the Google SERPs (but they are in Bing!) - this was done by using the "site:" search operator. Have double-checked our robots.txt file too just in case something silly had slipped in, but that's as it should be... Has anyone else noticed similar behaviour or could suggest things I could check? Thanks!

    | Nobody1556905035114
    0

  • Hello! I'm rolling out hreflang tags in my client's "main" XML sitemap. My question is: do we need to implement these tags in the country-level XML sitemaps also? Thanks!

    | SimpleSearch
    1
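
    For reference, hreflang annotations in an XML sitemap look roughly like this; the domains and locales below are hypothetical placeholders:

      <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
              xmlns:xhtml="http://www.w3.org/1999/xhtml">
        <url>
          <loc>https://www.example.com/page/</loc>
          <xhtml:link rel="alternate" hreflang="en-us"
                      href="https://www.example.com/page/"/>
          <xhtml:link rel="alternate" hreflang="de-de"
                      href="https://www.example.de/page/"/>
        </url>
      </urlset>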

  • Hi all, recently we ran into an unusual issue with images in the Google index. We have more than 1,500 images in our sitemap, but according to Search Console only 273 of those are indexed. If I check Google image search directly, I find more images in the index, but still not all of them. For example, this post has 28 images and only 17 are indexed in Google Images. This is happening to other posts as well. I checked all possible reasons (missing alt, image as background, file size, fetch and render in Search Console), but none of these are relevant in our case. So everything looks fine, but not all images are in the index. Any ideas on this issue? Your feedback is much appreciated, thanks

    | flo_seo
    1

  • Our parent company is looking to absorb the domain of a brand we are discontinuing. The domain we want to absorb has thousands of blog posts from 2010 onward. Much of the content is old but still high-converting. We would like to keep as much of the potential traffic as possible, but we don't want the parent website to become too large or lose credibility with too many 301 redirects. Any advice on the best way to do this?

    | NichGunn
    1
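
    A minimal .htaccess sketch of the page-level 301 mapping such a domain consolidation usually involves, assuming Apache and hypothetical domain names:

      RewriteEngine On
      # Send every request on the discontinued brand's domain to the
      # matching path on the parent domain:
      RewriteCond %{HTTP_HOST} ^(www\.)?old-brand\.com$ [NC]
      RewriteRule ^(.*)$ https://www.parent-company.com/$1 [R=301,L]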

  • Hey guys, I've read mixed reviews on this. Does anyone have an answer to whether or not hiding H1 text negatively affects SEO in 2018? Thanks for the help!

    | Jason-Reid
    0

  • We have around 17 pages that have underscores in the URL. Of those 17, we have changed the URLs of 3 pages so far; for example, test_sample_demo.html became test-sample-demo.html. After the updates, we added redirects as follows: Redirect 301 test_sample_demo.html test-sample-demo.html Presently Google webmaster tools shows "Pages with duplicate meta descriptions" and "Pages with duplicate title tags" for the changed pages. How do we fix this? Please help us

    | Intellect
    0
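
    One detail worth checking on the redirect above: Apache's Redirect directive expects the source as a path beginning with "/". A hedged sketch of the usual form, with a hypothetical domain:

      # In .htaccess -- the source is a root-relative path, and the
      # target is ideally an absolute URL:
      Redirect 301 /test_sample_demo.html https://www.example.com/test-sample-demo.html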

  • Hi SEO Experts, If I add a link on an image without an alt attribute, will Search Console show the anchor text as blank?

    | Rajesh.Prajapati
    1
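
    To illustrate the question, the same image link with and without alt text; in the first case search engines generally treat the alt value as the link's anchor text:

      <!-- the alt text serves as the anchor text for this link -->
      <a href="/widgets/"><img src="/img/widgets.png" alt="Blue widgets"></a>

      <!-- no alt attribute: the link has no anchor text to report -->
      <a href="/widgets/"><img src="/img/widgets.png"></a>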

  • We have the main pricing page here: https://www.eginnovations.com/product/pricing Then depending on what you click, you'll be taken to the appropriate form on one of these pages: https://www.eginnovations.com/product/request-quote?pricetype=audit https://www.eginnovations.com/product/request-quote?pricetype=saas https://www.eginnovations.com/product/request-quote?pricetype=perpetual https://www.eginnovations.com/product/request-quote?pricetype=subscription How should I handle these? Noindex, follow? Set a canonical? I keep getting notifications that these are duplicate content, but it's just a way to keep the form fills organized. Thanks for your help!

    | eGInnovations
    1
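
    For reference, a canonical tag on each parameterized variant would look roughly like this sketch, placed in the page's <head>:

      <!-- in the <head> of request-quote?pricetype=saas and the other variants -->
      <link rel="canonical" href="https://www.eginnovations.com/product/request-quote" />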

  • Hi all. I know there are several threads related to noindexing blog archives and category pages, so if this has already been answered, please direct me to that post. My blog archive pages have preview text from the posts. Each time I post a blog, the last post on any given archive page shifts to the first spot on the next archive page. Moz seems to report these as new duplicate content issues each week. I have my archive pages set to meta noindex, so can I feel good about continuing to ignore these duplicate content issues, or is there something else I should be doing to prevent penalties? TIA!

    | mkupfer
    1

  • Why is this happening? How can I fix it? This problem is on these websites: http://www.skydas.lv and http://www.m-lux.lv

    | Felter
    0

  • Hi all, I'm hoping someone can help me solve this once and for all! I keep getting hreflang errors on our site crawls and I cannot understand why. Does anything here look off to you? Thank you!

    | eGInnovations
    1

  • I work on a big eCommerce site with thousands of pages. We are talking about creating a separate image sitemap. Can anyone give an example of an eCommerce site that has a separate image sitemap? I looked at several and can't find one. Also, what are the best practices for creating a good image sitemap? Thanks!

    | bizuH
    0
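
    For reference, image sitemap entries use Google's image sitemap extension and look roughly like this; the URLs below are hypothetical placeholders:

      <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
              xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
        <url>
          <loc>https://www.example.com/product/blue-widget</loc>
          <image:image>
            <image:loc>https://www.example.com/images/blue-widget-front.jpg</image:loc>
          </image:image>
          <image:image>
            <image:loc>https://www.example.com/images/blue-widget-side.jpg</image:loc>
          </image:image>
        </url>
      </urlset>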

  • Hi Mozzers, A client's website has a lot of internal directories defined as /node/*. I already added the rule 'Disallow: /node/*' to the robots.txt file to prevent bots from crawling these pages. However, the pages are already indexed and appear in the search results. In a Deepcrawl article, they say you can simply add the rule 'Noindex: /node/*' to the robots.txt file, but other sources claim the only way is to add a noindex directive in the meta robots tag of every page. Can someone tell me which is the best way to prevent these pages from getting indexed? Small note: there are more than 100 pages. Thanks!
    Jens

    | WeAreDigital_BE
    0
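
    Background worth noting: Noindex in robots.txt was never an officially supported directive. One way to noindex a large set of URLs without editing every page is an X-Robots-Tag HTTP header set at the server level; a hedged Apache 2.4 sketch (requires mod_headers), assuming the /node/ paths above:

      # Send a noindex header for every URL whose path starts with /node/:
      <If "%{REQUEST_URI} =~ m#^/node/#">
          Header set X-Robots-Tag "noindex"
      </If>

    Note that a Disallow rule would stop crawlers from ever fetching these pages and seeing the header, so the disallow may need lifting until the URLs drop out of the index.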

  • Hey, I have migrated my website from HTTP to HTTPS, but my rankings have dropped from the first page to page 3 or 4, and some keywords have disappeared entirely. I have redirected all my URLs to HTTPS and done everything properly. Please help me. My website is justinterio.com

    | vikrantrathore
    1

  • Hi everyone, When creating my XML sitemap here it is only able to detect a portion of the website. I am missing at least 20 URLs (blog pages + newly created resource pages). I have checked those missing URLs: all of them are indexed and they're not blocked by the robots.txt. Any idea why this is happening? I need to make sure all the URLs I want are included in the XML sitemap. Thanks!

    | Taysir
    0

  • We currently post our new blog posts on our company profile (on LinkedIn) immediately after publishing on our company blog. Is this running the risk of duplicate content? My colleague pastes the new post's URL in (I've given an example using my own site, ironically about Panda)! Does this run the risk of creating duplicate content? I've tried searching for indexed pages in Google for past posts and only find the originals (no URLs for the LinkedIn domain). I've got a bit confused about the whole subject, having read Neil Patel's excellent article on "...syndicating content without screwing up your SEO"! Thank you 🙂

    | Catherine_Selectaglaze
    1

  • Hello! I have a question I've been unable to find a clear answer to. My client's primary domain is a .com with a satisfactorily high DA. My client owns the .org version of its domain (which has a very low DA, I suppose due to inactivity) but has never forwarded it on. For branding/visibility/traffic reasons, I'd like to recommend they set up the .org domain to forward to the .com domain, but I wanted to ask a few questions first: 1. Does forwarding low-value DA domains to high-value DA domains have any negative authority/SEO impact? 2. If the .org domain was to be forwarded, am I correct that an SSL cert is not necessary for it if the .com domain has an SSL cert? Thanks in advance!

    | mollykathariner_ms
    1

  • Hello everyone, Would like to seek your advice. Our company's classifieds site currently sets a 301 redirect from expired product listings to the relevant category pages, and at the same time removes the listing's URL from the sitemap. In some cases, users reactivate the expired ads; the page then returns status 200 again and is re-included in the sitemap. Wondering if search engines are able to pick up and index the same webpage again? Thanks in advance!
    Raymond

    | raysamu
    0

  • Hi all, I have a question related to the breadcrumb. We have an e-commerce site. There is a difference in the breadcrumb when navigating to our products vs directly browsing to the URL of the product. When you navigate to the product the breadcrumb looks like this (also in the source code):
    Home > Sand > Sandpit sand > Bigbag Sandpit sand type xyz When you visit the product URL directly, the breadcrumb looks like this (also in the source code):
    Home > Bigbag Sandpit sand type xyz It looks to me like this can be confusing for a search engine, making it unclear what the site's structure/hierarchy is (and for a user too, of course). Is that true? If so, does this have a big direct negative SEO impact? Thanks in advance!

    | AMAGARD
    0
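
    If it helps frame the question: the intended trail can also be declared explicitly with BreadcrumbList structured data, so search engines see one consistent hierarchy regardless of how the visitor arrived. A sketch with hypothetical URLs:

      <script type="application/ld+json">
      {
        "@context": "https://schema.org",
        "@type": "BreadcrumbList",
        "itemListElement": [
          { "@type": "ListItem", "position": 1, "name": "Home",
            "item": "https://www.example.com/" },
          { "@type": "ListItem", "position": 2, "name": "Sand",
            "item": "https://www.example.com/sand/" },
          { "@type": "ListItem", "position": 3, "name": "Sandpit sand",
            "item": "https://www.example.com/sand/sandpit-sand/" },
          { "@type": "ListItem", "position": 4, "name": "Bigbag Sandpit sand type xyz" }
        ]
      }
      </script>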

  • Please check my URL, https://www.drmalpani.com. It was first built in Joomla and then, for security reasons, I moved from Joomla to Laravel, but since then I have been facing a steep fall in organic traffic. It was earlier 10,000 users per day and now I see only 2,500 users per day. I don't know what went wrong; each URL is the same as it was in Joomla, and I have also worked through all the 404 URLs. Looking forward to your guidance on this. Thanks, Bharat

    | DrMalpani
    0

  • Hi there - I'm not an SEO, so help would be appreciated! Moz is telling me we have a redirect loop, but the URLs are the same: https://www.example.com/ to https://www.example.com/ Why is my homepage creating a redirect loop to itself? We use WordPress and I do not have any redirects listed for our homepage. Could this have something to do with switching to HTTPS in April? Thanks, Katherine

    | kmmartin
    0

  • I'm doing some log file analysis and need to run a reverse DNS check to ensure that I'm analysing logs from Google and not any imposters. Is there a command I can use in the terminal to do this? If not, what's the best way to verify Googlebot? Thanks

    | daniel-brooks
    0
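
    For reference, the two-step verification Google documents can be done with host (or dig -x). The IP below is a hypothetical log entry, and the output lines shown as comments are illustrative:

      # Step 1: reverse lookup -- the hostname should end in
      # googlebot.com or google.com
      host 66.249.66.1
      # 1.66.249.66.in-addr.arpa domain name pointer crawl-66-249-66-1.googlebot.com.

      # Step 2: forward-confirm that the returned hostname resolves
      # back to the same IP
      host crawl-66-249-66-1.googlebot.com
      # crawl-66-249-66-1.googlebot.com has address 66.249.66.1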

  • Moz is indicating that I have several URLs that are too long. Should I shorten the URLs and redirect the long URLs to the new, shorter URL? Or should I leave them alone, as I've been reading to avoid redirects?

    | Hanover440
    1

  • Hello, We have moved our site to HTTPS, and Google Analytics seems to be tracking correctly. However, I have seen some conflicting information: should I create a new view in Analytics? Additionally, should I also create a new HTTPS property in Google Search Console and set it as the preferred domain? If so, should I keep the old sitemap for my HTTP property while updating the sitemap to HTTPS-only for the HTTPS property? Thirdly, should I create a new property as well as new sitemaps in Bing Webmaster? Finally, after doing a crawl on our HTTP domain, which has a 301 to HTTPS, the crawl stopped after the redirect - is this a result of using a free crawling tool, or will bots not be able to crawl my site after this redirect? Thanks for all the help in advance, I know there are a lot of questions here.

    | Tom3_15
    0

  • Hi, I am running a technical audit on a site which is causing me a few issues. The site is small and awkwardly built, using lots of JS, animations, and dynamic URL extensions (bit of a nightmare). I can see that it has only 5 pages being indexed in Google despite having over 25 pages submitted to Google via the sitemap in Search Console. The beta Search Console is telling me that there are 23 URLs marked with a 'noindex' tag; however, when I view the page source and check the code of these pages, there are no meta robots tags at all - I have also checked the robots.txt file. Also, both the Screaming Frog and DeepCrawl tools are failing to pick up these URLs, so I am at a bit of a loss about how to find out what's going on. I suspect the creative agency who built the site had no idea about general website best practice, and that the dynamic URL extensions may have something to do with the no-indexing. Any advice on this would be really appreciated. Are there any other ways of no-indexing pages which the dev/creative team might have implemented by accident? What am I missing here? Thanks,

    | NickG-123
    0
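
    One possibility the page source won't show: a noindex delivered as an X-Robots-Tag HTTP header rather than a meta tag. A quick terminal check, with a hypothetical URL:

      # Fetch only the response headers and look for a robots header:
      curl -sI https://www.example.com/some-page/ | grep -i x-robots-tag
      # A result like "X-Robots-Tag: noindex" would explain pages
      # reported as noindexed despite having no meta robots tag.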

  • Some of our pages under a specific website section get deleted from another data source, and we want to resolve the problem of 404s. Can we set up automated 301 redirects to the main page as soon as one of these pages is deleted?

    | lina_digital
    2

  • Hi All, I have a WordPress website which has about 10-12 pages in total. When I search for the brand name on Google, the home page URL isn't appearing on the result pages while the rest of the pages are appearing. There are no issues with the canonicalization or meta titles/descriptions as such. What could possibly be the reason behind this aberration? Looking forward to your advice! Cheers

    | ugorayan
    0

  • I recently made a site, Gizmocombot.com. The site has no spam record and no lousy backlinks, and all of its articles are unique. Can anyone tell us how we can get our site unpenalized in Google Webmaster Tools and Google search results? I attached a screenshot as well, if you need it.

    | litoginamaaba333
    2

  • Hi guys, do you think the following guide is enough to move a Bluehost WordPress site to HTTPS in an SEO best-practice way? https://www.shoutmeloud.com/free-ssl-certificate-bluehost-hosting.html Basically their steps are: install SSL from the Bluehost panel; install the Really Simple SSL WP plugin; edit your .htaccess file and add the code for HTTP to HTTPS redirection; update all HTTP URLs in the database to HTTPS using the Search and Replace plugin; use the Broken Link Checker plugin and its redirection module to find links to third-party sites with HTTP that should now be HTTPS. Lastly: submit your new HTTPS site to Google Search Console and submit your sitemap; update your profile link in Google Analytics; update your website links on social media profiles and anywhere else they exist (this step you can do in pieces in the coming days); read this guide to learn more about HTTP to HTTPS migration and fixing mixed content; and if you disabled WhoIs guard for your domain name, you can enable it now. Do you know a better practical guide for WordPress, in terms of useful plugins to handle the migration? Thanks to everyone!

    | Dreamrealemedia
    0
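
    For reference, the .htaccess redirection code that step typically refers to is a rewrite along these lines - a common sketch, not necessarily exactly what the guide uses:

      RewriteEngine On
      # Send any plain-HTTP request to the HTTPS version of the same URL:
      RewriteCond %{HTTPS} off
      RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]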

  • Hi Moz Community, On April 22 my site received a manual action in Google Webmaster telling me it was caused by unnatural links. After a deep cleaning of all the sitewide links, which I think were the major problem with my external links, I submitted a reconsideration request on May 4. Google rejected my reconsideration request for the unnatural link manual action on May 29, and listed one blog article twice as an example, which is quite weird to me. Is it normal for Google to list one URL twice as an example in the feedback? I don't quite see the reason for that. Does anybody have any idea about that? This is really quite frustrating to me. And to be honest, I don't see many problems with the article Google listed either. Yes, it's all about our product and it has 3 do-follow links to our site, but it contains no words such as sponsor, advertisement, or rewards... And the blog itself is quite healthy as well. The post also gets rather high engagement, with organic comments and shares. How did Google flag that? I don't think it's possible that Google went through all our site links one by one... Hope you guys can help me with that. Thanks in advance! Ben

    | Ben_fotor
    0

  • Hi All, I currently have an issue whereby my domain name (just the homepage) has: mydomain.com and: mydomain.com/ The Moz crawler flags this up as duplicate content - does anyone know of a way I can fix this? Thanks! Jack

    | Jack1166
    0

  • Is there a way to protect your sitemap.xml so that only Google can read it and would it make sense to do this?

    | Roverandom
    0

  • Hi, We have recently lost the review rich snippet for a product page that used to show it consistently for a long time. On top of that, we have also lost the breadcrumb. The markup hasn't changed on these pages, and the GWT structured data testing tool doesn't show any anomalies. A few weeks back we deployed a new property that lists reviews from the first site without actually being under the same domain. Could that be an issue - could the reviews have been considered plagiarism somehow? Is there a way for us to confirm this theory? What other factors may have led us to lose those rich snippets? Thanks

    | mattam
    0

  • Hi all, we're going to change the URL structure of our website, moving from /news-and-views/blog/thisistheblogpost to /blog/thisistheblogpost (i.e. removing the /news-and-views/ portion of the URL). There are approximately 300 posts. I have 2 questions: will this have an impact on domain-wide authority, and are bulk 301 redirects the best solution? The blogs themselves don't get a huge amount of traffic, but we just want to make sure we don't do something that will be penalized by search engines.

    | francisclark
    2
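
    A single pattern rule usually covers a bulk move like this; a hedged Apache sketch for the ~300 posts, assuming the slugs stay the same:

      # Redirect /news-and-views/blog/<slug> to /blog/<slug> in one rule:
      RedirectMatch 301 ^/news-and-views/blog/(.*)$ /blog/$1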

  • I want to clean up my tags, and I'm worried I'm going to look in my webmaster tools the next day and see hundreds of errors. What's the best way of doing this?

    | howlusa
    0

  • We recently implemented rel=canonical on a few of our pages to prevent query parameters from showing up in the SERPs. The two pages we added the tags to are no longer ranking. The pages used to rank very well for branded terms such as "morningstar direct" and "morningstar sustainability", but now don't show up at all. When you search for the URLs specifically, for example "products/direct morningstar", the query-parameter version is still indexed. Does anyone know why this might be or what we can do to fix this issue? The two pages are www.morningstar.com/products/direct and https://www.morningstar.com/company/sustainability

    | jmigdal
    0

  • On the Bing search engine, it isn't showing the actual meta title we have for a website. It's showing something different. However, the correct meta title is showing on the Google search engine. Has anyone had the same issue? Has anyone been able to fix this issue? Thanks for your help!

    | Harrison.Stickboy
    0

  • I recently migrated to an HTTPS version of a Volusion site. They have some type of internal 301 redirect method to accommodate the entire site. I have also used the 301 redirect manager to redirect categories and pages which I have changed. The question is: if I have changed a page internally in the redirect manager from, say, source /bluewidget to target /superbluewidget, is it wiser (or even possible) to reduce the redirect chain from 3 to 2 steps by redirecting source /bluewidget directly to target https://www.example/superbluewidget? Can a relative link be targeted to a full URL to reduce steps in a 301 redirect chain? Thanks

    | mrkingsley
    0
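
    To illustrate the chain-shortening idea above: pointing the source directly at the final absolute HTTPS URL removes the intermediate hop. The domain below is a hypothetical stand-in:

      # Chain: /bluewidget -> /superbluewidget -> https://www.example.com/superbluewidget
      # Collapsed by targeting the absolute URL directly:
      Redirect 301 /bluewidget https://www.example.com/superbluewidget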

  • I have a quick question that I hope someone might be able to help me with. On a WordPress website I have a lot of posts in each category. My problem is there are now several category pages, i.e.: https://www.mywebsite.com/category/cat-name/ https://www.mywebsite.com/category/cat-name/page2 https://www.mywebsite.com/category/cat-name/page3 On the category page I can set the page title/description etc., but I can't do that on page 2/page 3 etc. Does anyone know how I can change the titles and descriptions etc. on those pages? Thanks

    | DaleZon
    0

  • Is there a tool out there to check any domain that might be pointing to my existing domain?

    | adlev
    0

  • I understand what canonicalization does; however, I'm a bit confused on one point. Generally, of course, it's used to determine the main article out of two which are identical. But what happens to the keywords if the content isn't quite identical? Example:
    Let's say the 'first page' is optimised for 'racing cycles'.
    The 'second page' is optimised for 'second-hand racing cycles'. Let's assume that the 'first page' doesn't have any reference to 'used' or 'second-hand', so it would be essentially unrelated to the 'second page'. If I then add a canonical tag to the 'second page' that points to the 'first page', in theory the 'second page' will drop from the search rankings and pass any link authority back to the 'first page'. What I want to know is: will the 'first page' then rank for the keywords that the second page used to rank for? (in this case, 'second-hand racing cycles')

    | seoman10
    0

  • On the crawl of our website we have had a warning that 157 URLs are too long. When I look at the URLs they are generally from 2016 or earlier. Should I just leave them as they are, or shorten the URLs and redirect to the new URLs? Thanks

    | DaleZon
    4

  • Hi All, I have a question I hope someone can help me with. I ran a scan on a website and it has a stack of URLs that are far too long. I am going through and changing the URLs to shorter ones. But my question is regarding redirections: WordPress seems to be automatically redirecting the old URLs to the new ones; should I be adding a more solid 301 in as well, or is the WordPress redirect enough? I ask as they don't all seem to keep redirecting. Thanks in advance for the help

    | DaleZon
    2
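
    One way to see what WordPress is actually doing with an old URL is to inspect the status code directly; a quick sketch with a hypothetical URL:

      # Show the status line and Location header for an old URL:
      curl -sI https://www.example.com/old-long-url/ | head -n 5
      # "HTTP/1.1 301 Moved Permanently" is the permanent redirect you want;
      # a 302 here would suggest adding an explicit 301 rule.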

  • Hi. I have a set of affiliate links on my website - 50 unique links in total. They are placed in a separate folder, /go/, and are all nofollowed. My website has 100 pages. When doing a site audit I get a report with 50 affiliate links per page x 100. That's a huge report. Is this really necessary? I can of course choose to ignore the automated reports, but I can't help but think that there must be an easier way to keep track of it. Like, if links 1-50 are located in /go/, they should automatically be nofollowed, regardless of where they are located on the root/sub domain. Bonus question: the format of the link is https://exampledomain.com/go/brandname. I am wondering if I should add a "/" at the end of it. If yes, why? If no, why not?

    | Dan-Louis
    0

  • Hi, Does anyone know if Search Console errors showing as follows are damaging to SERPs: /xmlrpc.php is returning a 405 error /wp-admin/admin-ajax.php is returning a 400 error These errors seem to have coincided almost to the day with a ranking drop for the primary keyword from mid page 1 to the bottom of page 2. No matter what I do I cannot seem to correct these errors. Any advice would be greatly appreciated. Thanks

    | DaleZon
    0

  • I recently began work on an existing WordPress website that was revamped about 3 months ago: https://thedoctorwithin.com. I'm a bit new to WordPress, so I thought I should reach out to some of the experts in the community. Checking 'Not found' Crawl Errors in Google Search Console, I notice many irrelevant links that are not present in the website, nor the database, as near as I can tell. When checking the source of these irrelevant links, I notice they're all generated from various pages in the site, as well as from non-existent pages allegedly in the site, even though these pages have never existed. For instance: https://thedoctorwithin.com/category/seminars/newsletters/page/7/newsletters/page/3/feedback-and-testimonials/ allegedly linked from: https://thedoctorwithin.com/category/seminars/newsletters/page/7/newsletters/page/3/ (doesn't exist) In other cases, these goofy URLs are even linked from the sitemap. BTW - all the URLs in the sitemap are valid URLs. Currently, the site has a flat structure: nearly all the content is merely URL/content/ without further breakdown (or subdirectories). Previous site versions had a more varied page organization, but what I'm seeing doesn't seem to reflect the current page organization, nor the previous one. I had a similar issue due to use of Divi's search feature, and ended up with some pretty deep non-existent links branching off of /search/, such as: https://thedoctorwithin.com/search/newsletters/page/2/feedback-and-testimonials/feedback-and-testimonials/online-continuing-education/consultations/ allegedly linked from: https://thedoctorwithin.com/search/newsletters/page/2/feedback-and-testimonials/feedback-and-testimonials/online-continuing-education/ (doesn't exist). I blocked the /search/ branches via robots.txt. No real loss, since neither /search/ nor any of its subdirectories are valid. There are numerous pre-existing categories and tags on the site. The categories and tags aren't used as pages. I suspect Google (and other engines) might be creating arbitrary paths from these. Looking through the site's 404 errors, I'm seeing the same behavior from Bing, Moz, and other spiders as well. I suppose I could use Search Console to remove URL/category/ and URL/tag/, and do the same for other legitimate spiders/search engines. Perhaps it would be better to use mod_rewrite to lead spiders to pages that actually do exist. Looking forward to suggestions about the best way to deal with these errant searches. Also curious to learn why these are occurring. Thank you.

    | linkjuiced
    0
