
Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Moz Q&A is closed.

After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies.

Category: Technical SEO

Discuss site health, structure, and other technical SEO strategies.


  • Does anyone have any recommendations on how you can tell Google (hopefully via a URL) not to index a particular page of a website? I have tried hiding certain sitemaps through Yoast SEO (which has worked to a degree), but certain parts of WordPress websites expose links without them actually being part of a "sitemap", so those links are harder to hide. I'm having an issue with one of my websites: the sitelinks that Google is suggesting are nowhere near the most popular pages, and I know that you can no longer ask Google to demote certain sitelinks through Search Console. Any suggestions are greatly appreciated! Thanks!

    | MainstreamMktg
    0
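
    A hedged sketch for the question above: the usual way to keep a single page out of Google without touching sitemaps is a meta robots tag in that page's head (Yoast SEO exposes this per post under its Advanced settings), or an X-Robots-Tag header sent from .htaccess. The file name below is a placeholder, and the server-side variant assumes Apache with mod_headers enabled:

        <!-- In the <head> of the page to exclude -->
        <meta name="robots" content="noindex">

        # Or server-side in .htaccess ("private-page.html" is hypothetical)
        <Files "private-page.html">
            Header set X-Robots-Tag "noindex"
        </Files>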

  • I am a firm believer in the fundamentals of SEO, but is there any data to support their impact, positive or negative, on a site's rank?

    | Brandonp
    0

  • Hey guys, I have a bizarre situation on my hands. I have a URL that is being wonky. The URL is redirecting to another URL, and the 301 redirect responsible is not in my .htaccess. There is a 301 redirect in my .htaccess, but it is being overridden by something else, i.e. whatever is causing the behaviour above. So basically URL A should redirect to URL B, but instead it's going to URL C. I know we were not hacked; it's not redirecting to some strange, bizarre domain. I have also disabled all of our plugins that redirect (to my knowledge). Any thoughts would be great!

    | HashtagHustler
    0
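
    A quick way to see every hop in the chain from the outside (hedged sketch; swap in the real URL). Worth remembering that WordPress can also issue redirects from the database or a theme/plugin hook, so the responsible rule may live outside .htaccess entirely:

        # Follow redirects and print each hop's status line and Location header
        curl -sIL https://example.com/url-a | grep -iE '^(HTTP|location)'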

  • Hello, We are working on a new site. The idea of the site is to have an ecommerce shop, but the homepage will be a content page, basically a blog page.
    My developer wants to have the blog (home) page on a subdomain, so blog.example.com, because it will be easier to make a nice content page this way, and the rest of the site will just be on the root domain (example.com). I'm just worried that this will be bad for our SEO efforts. I've always thought it was better to use a subfolder rather than a subdomain. If we get links to the content on the subdomain, will the link juice flow to the shop on the root domain? What are your thoughts?

    | pinder325
    0

  • Hi there, You might have experienced this before, but for me it's a first. A client of mine moved from domain A (www.domainA.com) to domain B (www.domainB.com). 301 redirects have all been in place for over a year, but the old domain still shows in Google when you search for "site:domainA.com". An HTTP header check shows this result for the URL https://www.domainA.com/company/cookie-policy.aspx HTTP/1.1 301 Moved Permanently =>
    Cache-Control => private
    Content-Length => 174
    Content-Type => text/html; charset=utf-8
    Location => https://www.domainB.com/legal/cookie-policy
    Server => Microsoft-IIS/10.0
    X-AspNetMvc-Version => 5.2
    X-AspNet-Version => 4.0.30319
    X-Powered-By => ASP.NET
    Date => Fri, 15 Mar 2019 12:01:33 GMT
    Connection => close Does the redirect look wrong? The change of address request was made in Google Search Console when the website was moved over a year ago. Edit: I checked domainA.com on Bing and it seems it's not indexed there, having been replaced with domainB.com, which is right. Only Google is still indexing the old domain! Please let me know your thoughts on why this is happening. Best,

    | iQi
    0

  • Due to the poor, unsightly look of breadcrumbs and the space they take up above the fold, we only employ breadcrumbs on our desktop version; breadcrumbs are hidden from view on the mobile version. However, as mobile-first indexing is now in play, what technical SEO impacts will this have? One thing that comes to mind is crawling of deeper pages, where breadcrumbs made them accessible in fewer than 3 link clicks. But I am now unsure of the impacts of not having breadcrumbs visible on the mobile version of our site.

    | oceanstorm
    0

  • We have a store with thousands of active items and thousands of sold items. Each product is unique, so there is only one of each. All products are pinned and pushed online... and then they sell, and we are left with a product page for a sold item. All products are keyword researched and can often rank well for longtail keywords. Would you: 1. Delete the page and let it 404 (we will get thousands)? 2. See if the page has a decent PA, incoming links and traffic and, if so, redirect it to a RELEVANT category page (again, there will be thousands)? 3. Reuse the page for another product? For example, a sold ruby ring gets replaced with a new ruby ring and we use that same page/URL for the new item. Gemma

    | acsilver
    0
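
    If option 2 is chosen, a minimal .htaccess sketch of the per-URL redirect (Apache; both paths are hypothetical):

        # Send a sold item's URL to the most relevant category page
        Redirect 301 /products/ruby-ring-1042 /category/ruby-rings/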

  • I have 5 URLs that are "missing titles"; however, all 5 are landing pages that were created in Pardot. How would I go about adding the missing titles? Would I need to add them on our website platform or in Pardot?

    | cbriggs
    0

  • I have been getting conflicting advice on the best way to implement schema for the following scenario. There is a central e-commerce store that is registered to its own unique address, which is "head office". There are a few physical shops, each of which has its own location and address, and each shop has its own landing page within /our-stores/. So each page on the website has the Organisation schema for the central organisation, and then each physical store landing page carries its own local markup as well as the Organisation schema. Is this correct? If it is, should I extend LocalBusiness with the store URL and sameAs for the GMB listing, and maybe the Companies House registration? It's also been suggested that we should use LocalBusiness for the head office of the company, then Department with the type Store. But I'm not sure about that?

    | MickEdwards
    0
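
    A hedged sketch of what a per-shop landing page's markup might look like: Store is a schema.org subtype of LocalBusiness, and sameAs can carry the GMB/Maps listing. All names, addresses and URLs below are placeholders:

        <script type="application/ld+json">
        {
          "@context": "https://schema.org",
          "@type": "Store",
          "name": "Example Shop - Leeds",
          "url": "https://www.example.com/our-stores/leeds/",
          "address": {
            "@type": "PostalAddress",
            "streetAddress": "1 High Street",
            "addressLocality": "Leeds",
            "postalCode": "LS1 1AA",
            "addressCountry": "GB"
          },
          "sameAs": ["https://www.google.com/maps/place/example"]
        }
        </script>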

  • I hope I am explaining this correctly; if I need to provide any clarity, please feel free to ask. We currently use a domain mask on an external platform that points back to our site. We are a non-profit, and the external site allows users to create peer-to-peer fundraisers that benefit our ministry. Currently we get many meta issues related to this site, as well as broken links when fundraisers expire, etc. We do not have a need to rank for the information from this site. Is there a way to exclude these pages so that they are not part of the search engine crawls as they relate to our site?

    | SamaritansPurse
    0

  • What does "optimal use of keywords in the header tag" actually mean, given that you indicate this as a factor that can hurt SEO?

    | Serg155
    0

  • When I run a WordPress blog through the structured data testing tool, I see that there is an @type of hentry. Is this enough for blogs, etc.? Is this a result of WordPress adding in this markup? Do you recommend adding the @type BlogPosting, and if so, why? What is the benefit of adding a specific type of schema? How does it help in blogging? Thanks

    | AL123al
    4
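
    For context, hentry comes from the microformats classes many WordPress themes print on posts, not from schema.org markup. A minimal BlogPosting sketch (all values are placeholders); the point of an explicit type is that it states unambiguously what the page is rather than leaving Google to infer it:

        <script type="application/ld+json">
        {
          "@context": "https://schema.org",
          "@type": "BlogPosting",
          "headline": "Example post title",
          "datePublished": "2019-03-01",
          "author": { "@type": "Person", "name": "Jane Doe" },
          "mainEntityOfPage": "https://www.example.com/blog/example-post/"
        }
        </script>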

  • Due to the constraints of the category page layout, many of the products in certain categories have their product titles truncated, in some cases cutting off 2-5 words depending on the product in question. The product name which displays on the category page is lifted straight from the product page itself, so it is not possible to do something like "product name including spec..." to indicate there is a bit more. I'm assuming not, but I just wanted to check that Google will not frown on this. Text is not being hidden; it just does not render fully in the restricted space. So there is a scenario of 'bits of' text in the source not displaying on the rendered page.

    | MickEdwards
    0

  • Hi there, In October, one of our customers' programmers made a change on their website to optimize its loading speed. Since then, all the SEO metrics have dropped. Apparently, the change was to move to CloudFlare and to add gzip compression. I was talking with the programmer and he told me he had no idea why that happened. Now, 5 months later, the SEO metrics haven't come back yet. What seems so weird is that two keywords in particular had the most massive drop. Those two keywords were the top keywords (more than 1k impressions a month) and now it's like there are no impressions or clicks at all. Has anyone had the same thing occur to them? Do you have any idea what could help this case?

    | H.M.N.
    0

  • Hi, I am working on a large global site which has around 9 different language variations. We have set up the hreflang tags and referenced the corresponding content. (We have not implemented an x-default reference, as we felt it was not necessary.) Using DeepCrawl and Search Console, we can see that these language variations are causing duplicate title issues, many of them. My assumption was that hreflang would have alleviated this issue and informed Google what is going on; however, I wanted to see if anyone has any experience with this kind of thing. It would be good to understand what the best practice approach is to deal with the problem. Is it even an issue at all, or are the tools just being over-sensitive? Thank you in advance.

    | NickG-123
    0
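
    Worth noting: hreflang tells Google which variation to serve to which audience, but it does not merge the pages into one, so crawl tools still report the duplicate titles; localising the titles themselves is the usual fix. For reference, a typical reciprocal set (placeholder URLs), where each page lists itself plus every alternate, and the x-default line stays optional as you chose:

        <link rel="alternate" hreflang="en-gb" href="https://www.example.com/uk/page/" />
        <link rel="alternate" hreflang="de-de" href="https://www.example.com/de/page/" />
        <link rel="alternate" hreflang="x-default" href="https://www.example.com/page/" />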

  • Hi, I've seen a fair amount of topics discussing the difference between domain names ending with or without trailing slashes, the impact on crawlers and how it behaves with canonical links.
    However, that discussion sticks to domain names only.
    What about subfolders and pages? How does it behave for those? Say I have a site structured like this:
    https://www.domain.com
    https://www.domain.com/page1 And for each of my pages, I have an automatic canonical link ending with a slash,
    e.g. <link rel="canonical" href="https://www.domain.com/page1/" /> for the above page. SEMrush flags this as a canonical error. But is it really?
    Are all my canonical links wrong because of that slash? And as a subsidiary question, both domain.com/page1 and domain.com/page1/ are accessible. Is this a mistake, or does it make no difference (I've read that those are considered different pages)? Thanks!
    G

    | GhillC
    0
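
    One hedged way to resolve the mismatch is to make the live URL agree with the canonical by 301ing the non-slash form to the slash form (Apache .htaccess sketch; invert it if the canonicals are changed to the non-slash version instead):

        RewriteEngine On
        # Append a trailing slash to extensionless, non-file URLs
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_URI} !/$
        RewriteCond %{REQUEST_URI} !\.[a-zA-Z0-9]{2,5}$
        RewriteRule ^(.*)$ /$1/ [R=301,L]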

  • Hi Guys, In the organic SERPs Google is pulling the incorrect product image: instead of the product image, it's showing an image from the related products sidebar. I've checked the structured data and og:image; everything is set to the product image. Not sure why Google is showing images from the related products sidebar. Any help, please?

    | SpartMoz
    0

  • Hi everyone, I have a global client with lots of duplicate page issues, mainly because they have duplicate pages for the US, UK and AUS. They do this because they don't offer all services in all markets and, of course, want to show local contact details on each version. What is the best way to handle this for SEO, as clearly I want to rank the local pages for each country? Cheers

    | Algorhythm_jT
    0

  • Just a quick question re implementation of JSON-LD breadcrumbs. You are here: Acme Company → Electronics → Computers → Laptops. So in this example Laptops is my current page, without a link in the visible on-page breadcrumb. When implementing JSON-LD BreadcrumbList, should Laptops be included in the schema snippet, or should it only run from Computers back to home?

    | MickEdwards
    0
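
    Google's structured data docs allow the current page as the final ListItem with its item URL omitted, so including Laptops is fine. A sketch using the example trail above (URLs are placeholders):

        <script type="application/ld+json">
        {
          "@context": "https://schema.org",
          "@type": "BreadcrumbList",
          "itemListElement": [
            { "@type": "ListItem", "position": 1, "name": "Acme Company", "item": "https://www.example.com/" },
            { "@type": "ListItem", "position": 2, "name": "Electronics", "item": "https://www.example.com/electronics/" },
            { "@type": "ListItem", "position": 3, "name": "Computers", "item": "https://www.example.com/electronics/computers/" },
            { "@type": "ListItem", "position": 4, "name": "Laptops" }
          ]
        }
        </script>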

  • Has anyone used Botify? Is this type of software necessary for a site with under 5K pages?

    | SoulSurfer8
    0

  • Hi all, I hope somebody can share their thoughts on the below. A web designer launched my client's new website and I have been tasked with the SEO. I was approached with an immediate problem: www.clientswebsite.co.uk was ranking 9th for their company name after being indexed by Google. The search results above www.clientswebsite.co.uk were mostly related to my client, but not all; for example, a direct competitor was also ranking. I have been working on the SEO for 2-3 weeks and had just managed to get to 3rd position for the company name, and then www.clientswebsite.co.uk disappeared from page 1! Now, instead, an irrelevant sub-page (a contact page) is ranking for the company name on page 2. I have checked and the home page is still indexed (did a site: check). The only problem software picks up is a redirect chain (http://homepage -> http://www.homepage -> https://homepage); the web developers said it wouldn't impact rankings (when I asked them to edit the htaccess file to fix it). I've listed below the SEO tasks I completed whilst attempting to rank for the company name:
    I set up Analytics and Webmaster Tools, in which I set the preferred domain (www)
    Added a sitemap
    Edited meta data, making sure the company name was included
    I contacted the relevant websites above www.clientswebsite.co.uk and asked them to link to the new website; I was successful with a couple of these
    I placed www.clientswebsite.co.uk on all of their social media profiles
    I reformatted headers on their home page, making sure the H1 included my client's company name
    I found 2 extra versions of my client's home page (not exact copies, but very similar content) that had been published, so I 301 redirected these to the correct home page
    Activated SSL and forced HTTPS
    I would really appreciate it if anyone could share their thoughts here, whether explanations or possible solutions. Adam

    | SO_UK
    0

  • Hi all, I have a question regarding changing the size of my sitemaps. Currently I generate sitemaps in batches of 50k URLs. A situation has come up where I need to change that size to 15k in order to be crawled by one of our licensed services. I haven't been able to find any documentation on whether changing the size of my sitemaps (but not the pages included in them) will affect my rankings negatively or my SEO efforts in general. If anyone has any insights or has experienced this with their site, please let me know!

    | Jason-Reid
    0

  • Hi, I want to redirect this old link http://www.g-store.gr/product_info.php?products_id=1735/ to this one: https://www.g-store.gr/golf-toualetas.html. I have made several attempts but with no result. If anyone can help, I will appreciate it. My website runs on an Apache server with cPanel. Thank you

    | alstam
    0
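
    Plain Redirect directives can't match query strings, which is the usual reason attempts like this fail. A hedged mod_rewrite sketch for .htaccess, using the URLs from the question; the trailing ? on the target drops the old query string:

        RewriteEngine On
        RewriteCond %{QUERY_STRING} ^products_id=1735 [NC]
        RewriteRule ^product_info\.php$ https://www.g-store.gr/golf-toualetas.html? [R=301,L]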

  • Hello, I have a broken plugin creating hundreds of WP-Content directory pages that are being indexed by Google. I cannot access the source code of these pages to add a noindex to them. The page URLs all have the plugin name within them. In order to resolve the issue, I wrote a solution in JavaScript to dynamically add a noindex tag to any URL containing the plugin name. Would this noindex be respected by Google, and is there a way to immediately check that it is respected? Currently, I cannot delete the plugin due to issues with its PHP. If you would like to view the code: https://codepen.io/trodrick/pen/Gwwaej?editors=0010 Thanks!

    | Tom3_15
    0
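
    Googlebot can pick up a meta robots tag injected at render time, provided the URLs are not blocked in robots.txt (a blocked page is never rendered); the rendered-HTML view in Search Console's URL Inspection is the quickest way to confirm. A sketch along those lines ('plugin-name' is a placeholder):

        // Inject <meta name="robots" content="noindex"> on matching URLs
        if (window.location.pathname.indexOf('plugin-name') !== -1) {
          var meta = document.createElement('meta');
          meta.name = 'robots';
          meta.content = 'noindex';
          document.head.appendChild(meta);
        }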

  • Hey guys, I'm getting this weird error when I submit my sitemap to Google. It says I'm getting a 409 error in my post-sitemap.xml file (https://cleargear.com/post-sitemap.xml). But when I check it, it looks totally fine. I am using YoastSEO to generate the sitemap.xml file. Has anyone else experienced this? Is this a big deal? If so, Does anyone know how to fix? Thanks EwTswL4

    | Extima-Christian
    0

  • Hi All, The site I am working on is built on WordPress. The plugin Revolution Slider was downloaded. While no longer utilized, it remained on the site for some time. This plugin began creating hundreds of URLs containing nothing but code on the page. I noticed these URLs were being indexed by Google. The URLs follow the structure: www.mysite.com/wp-content/uploads/revslider/templates/this-part-changes/ I have done the following to prevent these URLs from being created and indexed: 1. Added a directive in my .htaccess to 404 all of these URLs. 2. Blocked /wp-content/uploads/revslider/ in my robots.txt. 3. Manually de-indexed each URL using the GSC tool. 4. Deleted the plugin. However, new URLs still appear in Google's index, despite being blocked by robots.txt and resolving to a 404. Can anyone suggest any next steps? Thanks!

    | Tom3_15
    0
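
    One likely culprit worth checking: the robots.txt block stops Googlebot fetching the URLs at all, so it can never see the 404s, and blocked-but-linked URLs can still sit in the index. The usual sequencing is to drop the Disallow and let the dead URLs answer for themselves, e.g. (hedged .htaccess sketch):

        # 410 tells Google the URLs are gone for good
        # (only seen once robots.txt stops blocking them)
        RedirectMatch 410 ^/wp-content/uploads/revslider/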

  • Hello, Several tools I'm using are returning errors due to "broken canonical links". However, I'm not too sure why that is. E.g.
    Page URL: domain.com/page.html?xxxx
    Canonical link URL: domain.com/page.html
    This returns an error. Any idea why? Am I doing it wrong? Thanks,
    G

    | GhillC
    1

  • Hi Everyone, A crawler from our SEO tool detects a 403 error on a link from our main domain to a couple of subdomains. However, these subdomains are perfectly accessible. What could be the problem? Is this error caused by the server, the crawl bot, or something else? I would love to hear your thoughts.
    Jens

    | WeAreDigital_BE
    0

  • Since most UTM codes/URLs are longer than 70ish characters, is this hurting my SEO? If it is, how can I solve the problem while still using a UTM code? Thanks!

    | Cassie_Ransom
    0

  • I've searched around quite a bit for a solution here, but I can't find anything. I apologize if this is too technical for the forum. I have a WordPress site hosted on Nginx by WP Engine. Currently it resolves requests to URLs either with or without a trailing slash, so both of these URLs are functional: mysite.com/single-post and mysite.com/single-post/. I would like to remove the trailing slash from all posts, forcing mysite.com/single-post/ to redirect to mysite.com/single-post. I created a redirect rule on the server: ^/(.*)/$ -> /$1, and this worked well for end users but rendered the admin panel inaccessible. Somewhere, WordPress is adding a trailing slash back onto the URL mysite.com/wp-admin, resulting in a redirect loop. I can't see anything obvious in .htaccess. Where is this rule adding a trailing slash to 'wp-admin' established? Thanks very much

    | james-tb
    0
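
    WordPress itself canonicalises /wp-admin back to /wp-admin/ (a standard directory redirect), which is what fights the server-wide rule; there is nothing to find in .htaccess because Nginx doesn't read it. A hedged Nginx sketch that exempts the admin paths via a negative lookahead; on WP Engine this would likely need to go through their redirect interface rather than a raw config file:

        # Strip trailing slashes everywhere except wp-admin and wp-login
        rewrite ^/(?!wp-admin|wp-login)(.*)/$ /$1 permanent;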

  • When we pull up Google Search Console, in the Index Coverage section, under the Excluded category, there is a sub-category called 'Duplicate page without canonical tag'. The majority of the 665 pages in that section are from a test environment. If we were to include in the robots.txt file a wildcard to cover every URL that starts with that particular root ("www.domain.com/host/"), could we eliminate the majority of these errors? That solution is not one of the 5 or 6 recommended solutions the Google Search Console Help text suggests. It seems like a simple, effective solution. Are we missing something?

    | CREW-MARKETING
    1
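
    The rule would work as a crawl block, and a trailing wildcard isn't even needed, since robots.txt rules are prefix matches. The caveat is that robots.txt stops crawling, not indexing, and it also stops Google from seeing any canonical or noindex tags on those pages, so existing report entries may linger. Sketch:

        User-agent: *
        Disallow: /host/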

  • We had a spam injection a few months ago. We successfully cleaned up the site and resubmitted it to Google. I recently received a notification showing a spike in 404 errors. All of the URLs have a common word at the beginning, injected via the spam: sitename.com/mono
    sitename.com/mono.php?buy-good-essays
    sitename.com/mono.php?professional-paper-writer There are about 100 total URLs with the same syntax, with the word "mono" in them. Based on my research, it seems it would be best to serve a 410. I wanted to know what the line of .htaccess code would be to do that in bulk for any URL that has the word "mono" after sitename.com/

    | vikasnwu
    0
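
    A hedged one-liner for .htaccess: RedirectMatch tests the URL path, so this covers /mono, /mono.php, and any query string after them:

        RedirectMatch 410 ^/mono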

  • Hello, I'm revamping an out-of-date website and am wondering if I need to include the folders (categories) in the URL structure. The proposed structure has 8 main folders. I've been reading that Google is OK if the folder is not included in the URL, but is it really? The hesitation I have is that the URLs are getting long, and each main folder has only one subfolder beneath it. So, /folder-name/facility-name/treatment-overview. This looks too long, doesn't it? Thanks!

    | lfrazer123
    0

  • Hi, I was wondering if anyone has seen this behaviour before? I haven't! We have around 20 sites and each one has lost all of its rankings (not in the index at all) since the medic update, apart from when specifying a location on the end of a keyword. I set to work trying to identify a common issue on each site, and began by improving speed issues in PageSpeed Insights. On one site I realised that after I had improved the speed score and then clicked "Fetch as Google", the rankings for that site all returned within seconds. I did the same for a different site and got exactly the same result. Cue me jumping around the office in delight! The pressure is off, people's jobs are safe, have a cup of tea and relax. Unfortunately this relief only lasted between 6-12 hours, and then the rankings go again. To me it seems like the sites are all suffering from some kind of on-page penalty which is lifted until the page can be assessed again, and when it is, the penalty is reapplied. Not one to give up, I set about methodically making changes until I found the issue. So far I have completely rewritten a site, reduced overuse of keywords, and added over 2,000 words to the homepage. Clicked "Fetch as Google" and the site came back, for 6 hours... So then I gave the site a completely fresh redesign and again clicked "Fetch as Google", with the same result. Since doing all that, I have swapped over to https, 301 redirected, etc., and now the site is completely gone and won't come back after fetching as Google. So before I dig myself even deeper, has anyone any ideas? Thanks.

    | semcheck1
    1

  • Hi, I have a website on http, but now I've moved to https. When I apply a 301 redirect from http to https and check in SEMrush, it shows it is unable to connect to https (other similar tools show the same), and when I remove the redirect all the tools work fine, but then my https version doesn't get indexed in Google. Can anybody help with what the issue could be?

    | dhananjay.kumar1
    0
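
    For reference, a standard .htaccess redirect sketch. If tools report they cannot connect to https, the TLS certificate itself is the first thing to check, since a redirect pointing at a broken https endpoint fails for crawlers too:

        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]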

  • Hello, I would like my website to remain crawlable to bots, but to block my wp-content and media. Does the following robots.txt work? I worry that the * user agent may conflict with the others.

    User-agent: *
    Disallow: /wp-admin/
    Disallow: /wp-includes/
    Disallow: /wp-content/

    User-agent: GoogleBot
    Allow: /

    User-agent: GoogleBot-Mobile
    Allow: /

    User-agent: GoogleBot-Image
    Allow: /

    User-agent: Bingbot
    Allow: /

    User-agent: Slurp
    Allow: /

    | Tom3_15
    0
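
    The groups above don't conflict, but they don't combine either: a crawler follows only the single most specific group that matches it, so as written Googlebot, Bingbot and Slurp would obey their own Allow: / group and ignore the wp-content block entirely. If the blocks should apply to those bots too, they need repeating per agent, e.g. (sketch for one group):

        User-agent: GoogleBot
        Disallow: /wp-admin/
        Disallow: /wp-includes/
        Disallow: /wp-content/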

  • Hi all, I've been checking these forums for an answer on how to find orphaned pages on my site, and I can see a lot of people saying I should cross-check my XML sitemap against a Screaming Frog crawl of my site. However, the sitemap is created using Screaming Frog in the first place... (I'm sure this is the case for a lot of people too). Are there any other ways to get a full list of orphaned pages? I assume it would be a developer request, but where can I ask them to look / what should they extract? Thanks!

    | KJH-HAC
    1

  • I'm just starting out in freelance SEO and I've taken on a client who is using Craft CMS (version 2.0ish) for their site. I am not even close to being competent enough to manually code via Twig, but I had the main developer install the SEOmatic plugin for me. My question from here is: are there any resources or tips I should be aware of starting out? I started by updating meta titles/descriptions via "New Template Meta(s)", but I'm not sure I'm doing the "template path" thing right. I haven't seen any visible changes in the browser, and the SERP preview I'm getting is giving me a broken link. I'm running a fresh Moz crawl right now to see if the changes took place or not. So: 1. Am I on the right track? 2. How long does it typically take for changes to start to show? 3. Is there anything I should be aware of? Any follow-up questions, just let me know; I'll be following this thread!

    | dig_ad_austin
    0

  • Hello, We have a Magento 2 extensions website, mageants.com. For a year, Google cached all of our pages every 15 days, but for the last 15 days our pages have not been cached by Google; the cache shows a 404 error. I checked Search Console for errors but didn't find any, so I manually requested fetch and render, but most pages still return the same 404 error. Example page: https://www.mageants.com/free-gift-for-magento-2.html Error: http://webcache.googleusercontent.com/search?q=cache%3Ahttps%3A%2F%2Fwww.mageants.com%2Ffree-gift-for-magento-2.html&rlz=1C1CHBD_enIN803IN804&oq=cache%3Ahttps%3A%2F%2Fwww.mageants.com%2Ffree-gift-for-magento-2.html&aqs=chrome..69i57j69i58.1569j0j4&sourceid=chrome&ie=UTF-8 Does anyone have a solution for this issue?

    | vikrantrathore
    0

  • If we were to deploy schema and noindex tags to our website via Google Tag Manager, would these tags be viewed and respected by other search engines?

    | GregLB
    0

  • Hi, I would like to ask why our website images are not being indexed in Google. I have shared the following screenshot of the Search Console: https://www.screencast.com/t/yKoCBT6Q8Upw Last week (Friday 14 Sept 2018) it was showing 23.5K out of 31K images submitted and indexed by Google. But now it is showing only 1K 😞 Can you please let me know why this might happen, and why images are not getting indexed and shown in Google Webmaster Tools.

    | 21centuryweb
    0

  • Hi, My client's dev has added custom fields for adding hreflang tags to the head of pages, such as: "Rel Type", "The URL", and "Language Code". Am I right in thinking that until a different language/country version of the site is created these can remain empty? Or should they still be populated, say with some sort of global reference, or are they best left blank, since that will leave the head content global by default? Also, how important is it to add charset to the language code, since it seems optional? This setup is on WP multisite with Yoast, and the dev asked me the below:
    "One thing to note is that Yoast generates its own canonical tags - so if you are going to use hreflang tags and canonical tags then you don't need to add a canonical using the custom fields I have set up - Yoast has that sorted. But if you are going down the route of NOT having any canonical tags - and using an x-default for the hreflang tags, I will need to try and suppress the Yoast canonical tag so you can do this. Much depends on your approach and what you think is best."
    So how do I know whether to use canonicals or x-default? I take it it's simplest to leverage Yoast and hence not add canonicals to the custom fields? Isn't x-default just for indicating language selectors/redirectors, not specific to 1 region? So long as we haven't got those, are we good to proceed with Yoast's generated canonicals? Cheers, Dan

    | Dan-Lawrence
    0

  • Hi All, I have a main domain and 9 different subdirectories for languages, for example: www.example.com/page.html www.example.com/uk/page-uk.html www.example.com/es/page-es.html We are implementing hreflang tags for the languages, but we are thinking of getting rid of the dashes in the language URLs (-uk or -es), so it will be: www.example.com/page.html www.example.com/uk/page.html www.example.com/es/page.html Would it be a problem to have the same page names even though they are in different subdirectories? Would we need to add canonical tags, at least for the main domain URLs (www.kornferry.com/page.html)? Thank you, Rachel

    | RaquelSaiz
    0

  • Hi all, I have suddenly noticed a massive jump in indexed pages. After performing a "site:" search, it was revealed that the sudden jump was due to the indexation of many pages beginning with the SERP title "Index of /wp-content/uploads/" for many uploaded pieces of content and plugins. This appeared approximately one month after switching to https. I have also noticed a decline in Bing rankings. Does anyone know what is causing this and how to fix it? To be clear, these pages are not normal /wp-content/uploads/ pages but rather "Index of" pages being included in Google. Thank you.

    | Tom3_15
    0
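
    "Index of" pages are Apache's auto-generated directory listings, which appear when a directory has no index file; they can be switched off site-wide in .htaccess (sketch):

        # Disable auto-generated directory listings
        Options -Indexes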

  • I've looked in the robots.txt tester and I can see 3 warnings: there is a 'Syntax not understood' warning for each of these XML sitemap lines:
    https://www.pkeducation.co.uk/post-sitemap.xml
    https://www.pkeducation.co.uk/sitemap_index.xml How do I fix or reformat these to remove the warnings? Many thanks in advance.
    Jim

    | JamesHancocks1
    0
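
    If the flagged lines are the sitemap URLs pasted on their own, that would explain 'Syntax not understood': bare URLs aren't valid robots.txt lines and need the Sitemap directive (sketch using the URLs above):

        Sitemap: https://www.pkeducation.co.uk/sitemap_index.xml
        Sitemap: https://www.pkeducation.co.uk/post-sitemap.xml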

  • This is in my robots.txt file. Does anyone know what it is supposed to accomplish? It doesn't appear to be blocking URLs with question marks: Disallow: /?crawler=1
    Disallow: /?mobile=1 Thank you

    | AmandaBridge
    0
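
    As written, each rule only matches the site root with that exact query string, because robots.txt rules are prefix matches against the URL path. To block the parameter on any path, a leading wildcard is needed (sketch):

        User-agent: *
        Disallow: /*?crawler=1
        Disallow: /*?mobile=1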

  • Hi all, Recently we've hit an unusual issue with images in the Google index. We have more than 1,500 images in our sitemap, but according to Search Console only 273 of those are indexed. If I check Google image search directly, I find more images in the index, but still not all of them. For example, one post has 28 images and only 17 are indexed in Google Images. This is happening to other posts as well. I've checked all possible reasons (missing alt, image as background, file size, fetch and render in Search Console), but none of these are relevant in our case. So everything looks fine, but not all images are in the index. Any ideas on this issue? Your feedback is much appreciated, thanks

    | flo_seo
    1

  • Hello! I have a question I've been unable to find a clear answer to. My client's primary domain is a .com with a satisfactorily high DA. My client owns the .org version of its domain (which has a very low DA, I suppose due to inactivity) but has never forwarded it on. For branding/visibility/traffic reasons, I'd like to recommend they set up the .org domain to forward to the .com domain, but I wanted to ask a few questions first: 1. Does forwarding low-value DA domains to high-value DA domains have any negative authority/SEO impact? 2. If the .org domain was to be forwarded, am I correct that an SSL cert is not necessary for it if the .com domain has an SSL cert? Thanks in advance!

    | mollykathariner_ms
    1

  • Hi there - I'm not an SEO, so help would be appreciated! Moz is telling me we have a redirect loop, but the URLs are the same: https://www.example.com/ to https://www.example.com/. Why is my homepage creating a redirect loop to itself? We use WordPress and I do not have any redirects listed for our homepage. Could this have something to do with switching to https in April? Thanks, Katherine

    | kmmartin
    0

  • I'm doing some log file analysis and need to run a reverse DNS check to ensure that I'm analysing logs from Google and not any imposters. Is there a command I can use in the terminal to do this? If not, what's the best way to verify Googlebot? Thanks

    | daniel-brooks
    0
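
    Google's documented two-step check: reverse-resolve the client IP, confirm the hostname ends in googlebot.com or google.com, then forward-resolve that hostname and confirm it returns the same IP. In a terminal (the IP below is from a real Googlebot range; substitute one from your logs):

        host 66.249.66.1
        # 1.66.249.66.in-addr.arpa domain name pointer crawl-66-249-66-1.googlebot.com.

        host crawl-66-249-66-1.googlebot.com
        # crawl-66-249-66-1.googlebot.com has address 66.249.66.1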
