
Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Moz Q&A is closed.

After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will remain viewable - we have locked both new posts and new replies.

Category: Technical SEO

Discuss site health, structure, and other technical SEO strategies.


  • Hello! I am continuing to have issues with the bulk upload option. Currently, there are 12 non-verified locations in a location group in my GMB account. I have approximately 6-8 more that need to be added to this group via bulk upload. When uploading the spreadsheet, I receive an error reading "You've exceeded the limit for the amount of locations you can upload to Google My Business in a single day. Try again later." It seems to happen specifically to the locations that aren't in my GMB account already. The others, the ones already in the account, are fine and simply read "No updates" when the bulk upload sheet is read. Everything else is marked as an error. Why is it marking some listings as nonviable when they come in via the bulk verification spreadsheet, which has been downloaded directly from the links Google has provided and filled in with the help of the sample and amenities list? How do we finish uploading all of the remaining locations? I have another, separate group (same company, groups split into US and International) under my name that may also need a bulk upload - what can I do to avoid this error in the future? Can its listings still be bulk uploaded to my account after I upload the first location group's listings? If you could provide any guidance, I'd be very grateful. Thanks in advance!

    | kmarsh
    0

  • I have a private/login site where all pages are noindex, nofollow. Can I still monitor external site links with Google Analytics?

    | jasmine.silver
    0

  • Does anyone know whether the linking of hashtags on Wix sites impacts SEO negatively or positively? It is coming up as an error in site crawls ('Pages with 404 errors'). Has anyone got any experience, please? For example, at the bottom of this blog post https://www.poppyandperle.com/post/face-painting-a-global-language the hashtags are linked, but they don't go to a page; they go to search results of all other blogs using that hashtag. Seems a bit of a strange approach to me.

    | Mediaholix
    0

  • I've had someone tell me that Google doesn't pay attention to H3 tags - only H1 and H2. I haven't found much online to back this up or discredit it; thought I'd ask the Moz community!

    | LivDetrick
    5

  • I have noticed Google image search has included suggestion tabs (e.g., design, nature... when searching "background") on top of the image search.
    Are there specific meta tags I can add to my images so that my images will show up on each tab?
    Do those filters just show content based on image keywords, or something else?

    | Mike555
    0

  • Hi, Along with the title of each page, a WordPress client has their brand name as an H1 on every single page. This is situated in the footer and just sits within the company info/address. Should these tags be removed, leaving just the page titles as H1s? Cheers, Lewis

    | PeaSoupDigital
    0

  • Hi, Over several years I have used many tags (more than 1,000) on my WordPress website 😞 but most of them have no views and no clicks in Google search. Now I want to delete these old, useless, unused tags, but I'm worried about SEO problems like many 404 pages. Does anyone have a safe way to delete these WordPress tags? How can I safely remove them?

    | markdoel
    0
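
One low-risk way to find and remove only the zero-usage tags is WP-CLI, if it is available on the host. A minimal sketch under that assumption (back up the database and review the listed tags before deleting anything):

```bash
# List every post tag with its usage count, including empty ones.
wp term list post_tag --hide_empty=0 --fields=term_id,name,count

# Delete only the tags attached to zero posts:
# CSV output -> keep rows whose count column is 0 -> delete by term ID.
wp term list post_tag --hide_empty=0 --fields=term_id,count --format=csv \
  | awk -F, '$2 == "0" {print $1}' \
  | xargs -r wp term delete post_tag
```

Because these tags had no posts, the only URLs that disappear are empty archive pages; any that were indexed will start returning 404, which is the expected outcome for intentionally removed pages.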

  • Hi! Our company is trying to come up with a few pages with some manuals to teach our users how to use our products. However, these pages require a username and password. My understanding is that user engagement will help a website's keyword rankings, and that Google will not be able to crawl or access pages requiring login, as it doesn't have a username and password. Based on that idea, does that mean all the content and user engagement on those login-protected pages won't help our overall SEO? Thanks in advance!

    | EverettChen
    0

  • Hi Team, Google Search Console used to allow you to use their structured data markup helper (https://www.google.com/webmasters/markup-helper/u/0/) to mark up multiple similar product pages at once. I do not see this feature anymore in the new Search Console. Does anyone have a recommendation for marking up multiple product pages without having to have schema markup firing in GTM for each product page?

    | agrier
    0

  • I have over 18,000 tags. Needless to say, most of them are relatively useless to the user and generate no traffic, while cluttering the site. (I use WordPress.) My plan is to delete tags, but I want to do so safely so as not to accumulate website errors. (Tag pages are noindexed.) What process should I take here? Here was my basic plan (any help is appreciated):
    1. Find irrelevant tags that are connected with hardly any posts.
    2. Go into the post, and remove said tag.
    3. Now, with a tag having a 'count' of 0, go into Tags and delete it.
    Safe, right? But now it seems those tag pages just turned into 404s ("Uh-oh... Page not found!"). Where do I go from here? Create 410s? Thanks, Mike

    | naturalsociety
    0
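
On the 404-vs-410 question: a 404 on a deliberately deleted, noindexed tag page is generally harmless, but if you want to signal that the removal is permanent, Apache can return 410 Gone. A minimal .htaccess sketch, assuming the default /tag/ permalink base (the paths are placeholders):

```apache
# mod_alias: return 410 Gone for one specific deleted tag archive.
Redirect gone /tag/some-deleted-tag/

# mod_rewrite: return 410 for everything under /tag/ -- only
# appropriate if ALL tag archives are being retired, not just some.
RewriteEngine On
RewriteRule ^tag/ - [G]
```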

  • Can I safely delete unused tags, ones with 0 posts connected to them? Thanks, Mike

    | naturalsociety
    0

  • I have a site that will have 90% of the content behind a metered paywall, so all content is accessible in a metered way. All users who aren't logged in will have access to 3 articles (of any kind) in a 30-day period. If they try to access more in a 30-day period, they will hit a paywall. I was reading this article on how to handle structured data with Google for content behind a paywall: https://www.searchenginejournal.com/paywalls-seo-strategy/311359/ However, the content is not ALWAYS behind a paywall, since it is metered. So if a new user comes to the site, they can see the article (regardless of what it is). Is there a different way to handle content that will be SOMETIMES behind a paywall because of a metered strategy? Theoretically I want 100% of the content indexed and accessible in SERPs; it will just be accessible depending on the user's history (cookies) with the site. I hope that makes sense.

    | triveraseo
    0
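
For reference, Google's documented markup for paywalled content is the same whether the wall is hard or metered: the article carries `isAccessibleForFree` set to False, with `hasPart` identifying the gated section, while Googlebot is served the full text. A minimal JSON-LD sketch (the CSS selector is a placeholder for whatever wraps the gated copy):

```json
{
  "@context": "https://schema.org",
  "@type": "NewsArticle",
  "headline": "Example metered article",
  "isAccessibleForFree": "False",
  "hasPart": {
    "@type": "WebPageElement",
    "isAccessibleForFree": "False",
    "cssSelector": ".paywalled-content"
  }
}
```

This markup exists precisely so pages can be fully indexed without the paywall being treated as cloaking, which matches the goal described above.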

  • Hi there! I wanted to remove a duplicated site from the Google index. I've read that you can do this by removing the URL from Google Search Console but, although I can't find it in Google Search Console, Google keeps on showing the site in SERPs. So I wanted to add a "noindex" meta tag to the code of the site; however, I've only found out how to do this for individual pages. Can you do the same for an entire site? How can I do it? Thank you for your help in advance! L

    | Chris_Wright
    1
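
If there is no way to edit every page template, a site-wide `X-Robots-Tag` response header achieves the same thing as a per-page meta tag. A minimal sketch assuming an Apache server with mod_headers enabled (the server stack is an assumption):

```apache
# .htaccess at the site root: ask search engines not to index
# any response served from this site.
Header set X-Robots-Tag "noindex, nofollow"
```

Note that crawlers can only see this header if the site is not blocked in robots.txt; a robots.txt block would hide the noindex directive itself.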

  • Hello all, We have a sitemap with URLs that have mostly user-generated content: a Profile Overview section, where users write about their services and some other things. Out of 46K URLs, only 14K are valid according to Search Console, and 32K URLs are excluded. Out of those 32K, 28K are "Discovered - currently not indexed". We can't really update these pages as they have user-generated content. However, we do want to leverage all these pages to help us in our SEO. So the question is: how do we make all of these pages indexable? If anyone can help in this regard, please let me know. Thanks!

    | akashkandari
    0

  • Hello, community! My client wants to use the barba.js plugin for their new site. What are the implications for SEO?

    | SimpleSearch
    0

  • Hi, I am looking to improve the aesthetics of some pages on my website by adding written content into collapsible tabs. I was wondering whether the content that is ‘hidden’ by tabs is given less weight by Google from the perspective of SEO? Some articles I have read suggest that tabbed content is weighted equally with the content which is already immediately visible to the user, but others suggest that this may not be the case. Please, can I request opinions on the matter? Any advice would be greatly appreciated, many thanks. Katarina

    | Katarina-Borovska
    0

  • Hey, I just want to know: does <base> in the head of a website affect SEO? And if it does, how?

    | m1700
    1
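
For context on what `<base>` actually does: it sets the base URL against which every relative URL on the page is resolved, so its SEO effect is indirect but real - a wrong base href makes relative links and assets resolve to unintended (possibly broken) URLs that crawlers will follow. A small illustration with placeholder URLs:

```html
<head>
  <!-- All relative URLs on this page now resolve against /docs/ -->
  <base href="https://www.example.com/docs/">
</head>
<body>
  <!-- This link resolves to https://www.example.com/docs/guide.html,
       regardless of the URL the page itself lives at. -->
  <a href="guide.html">Guide</a>
</body>
```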

  • Our sitemap is showing in Google's SERP. It's only for very specific queries that don't seem to have much value (it's a healthcare website, and when a doctor who isn't with us is searched together with the brand name, e.g. 'John Smith Brand', the sitemap shows if a first or last name in it matches the query). Still, is there a way to make the sitemap non-indexed so it doesn't show in the SERP? I've seen the "x-robots-tag: noindex" header as a possible option, but before taking any action I wanted to see if this is still true and if it would work.

    | Kyleroe95
    0
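
The `X-Robots-Tag: noindex` approach does still work for this, and it does not stop search engines from fetching and using the sitemap for URL discovery; it only keeps the file itself out of the results. A minimal .htaccess sketch, assuming Apache with mod_headers (the filename is a placeholder for the real sitemap path):

```apache
<Files "sitemap.xml">
  # Keep the sitemap file out of search results without
  # blocking crawlers from reading it.
  Header set X-Robots-Tag "noindex"
</Files>
```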

  • I am trying to put together a comprehensive list of all pages that are indexed in Google, and have heard differing opinions on how to do this.

    | SpodekandCo
    0

  • For a client, we would like to work with a content hub, but their website is built on a custom CMS, so we are limited in our options, and if we ask their web developers they quote crazy prices to help us. So now we have the idea to build the content hub with WordPress and implement it next to their current CMS, for example on www.website.com/contenthub/. As far as I know this is technically possible, and there are no negative effects regarding SEO as long as we link the two sitemaps together. Am I right, or am I missing something here?

    | Siphoplait
    0

  • Regarding Google's new max-snippet:[number] directive: is this specifically for the length of the meta description? Also, does anyone know what the max character length is?

    | SoulSurfer8
    1
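
For reference: `max-snippet` is a robots directive, not a meta description setting. It caps the number of characters Google may use for the textual snippet (which may be drawn from the meta description or from page content), and there is no fixed maximum - the limit is whatever value you set. A small example:

```html
<!-- Cap snippets from this page at 160 characters. -->
<meta name="robots" content="max-snippet:160">

<!-- max-snippet:-1 places no limit (the default behaviour);
     max-snippet:0 allows no text snippet at all. -->
<meta name="robots" content="max-snippet:-1">
```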

  • Hi Moz Community, I've recently created several new pages on my site using much of the same copy from blog posts on the same topics (we did this for design flexibility and a few other reasons). The blogs and pages aren't exactly identical, as the new pages have much more content, but I don't think there's a point to having both and I don't want to have duplicate content, so we've used 301 redirects from the old blog posts to the new pages of the same topic. My question is: can I go ahead and delete the old blog posts? (Or would there be any reasons I shouldn't delete them?) I'm guessing with the 301 redirects, all will be well in the world and I can just delete the old posts, but I wanted to triple check to make sure. Thanks so much for your feedback, I really appreciate it!

    | TaraLP
    1

  • Hi, wondering if anyone can point me to resources or explain the difference between these two. If a site has URL parameters disallowed in robots.txt, is it redundant to edit the Search Console parameter settings to anything other than "Let Googlebot Decide"?

    | LivDetrick
    0

  • We have 2 separate sites for desktop (www.example.com) and mobile (m.example.com). As per the guideline, we have added a rel=alternate tag on www.example.com pointing to the mobile URL (m.example.com) and a rel=canonical tag on m.example.com pointing to the desktop site (www.example.com). However, I didn't find any guideline on what canonical tag we should add in this case: on the desktop site, www.example.com/PageA has a canonical tag to www.example.com/PageB, and on this page we have a rel=alternate tag to m.example.com/PageA. What canonical should we add for the mobile version of Page A? Should m.example.com/PageA point its canonical tag to www.example.com/PageA, or to www.example.com/PageB? Kalpesh

    | kguard
    0
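
For reference, Google's separate-URLs documentation pairs each mobile page with its direct desktop equivalent one-to-one, which would point toward m.example.com/PageA canonicalising to www.example.com/PageA (its own desktop twin) rather than to PageB - though the docs do not explicitly address the case where that desktop page itself canonicalises elsewhere. The standard documented pairing looks like this:

```html
<!-- On the desktop page www.example.com/PageA -->
<link rel="alternate"
      media="only screen and (max-width: 640px)"
      href="https://m.example.com/PageA">

<!-- On the mobile page m.example.com/PageA -->
<link rel="canonical" href="https://www.example.com/PageA">
```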

  • The site in question uses Wordpress. They have a Resources section that is broken into two categories (A or B). Underneath each of these categories is 5 or 6 subcategories. The structure looks like this: /p/main-category-a/subcategory/blog-post-name /p/main-category-b/subcategory/blog-post-name All posts have a main category, but other posts often have multiple subcategories while some posts also fall into both main categories. What would be the easiest or most effective way to auto-populate the breadcrumb based on from where the person reached the blog post? So for example, a way to set Home -> Main Category -> Subcategory 1 as the breadcrumb if they reach it from the Subcategory 1 landing page. Or is this not possible and we should just set the breadcrumb manually based on where we feel it best lives? Thanks.

    | Alces
    0

  • All of our URLs in Google My Business are tagged with ?utm_source=gmb. This way, when people click on one within a Google Maps listing, knowledge graph, etc., we know it came from there. I'm assuming using a canonical on all ?utm_source pages (we have others, including some in the index) won't cause any problems with this, correct? Since they're not technically traditional organic SERPs? Dumb question I know, but better safe than sorry. Thanks.

    | Alces
    1

  • Hello all, I've spent the past day scouring guides and walkthroughs and advice and Q&As regarding this (including on here), and while I'm pretty confident in my approach to this query, I wanted to crowd-source some advice in case I might be way off base. I'll start by saying that Technical SEO is arguably my weakest area, so please bear with me. Anyhoozles, onto the question (and advance apologies for being vague):
    PROBLEM: I'm working on a website that, in part, works with providers of a service to open their own programs/centers. Most programs tend to run their own events, which leads to an influx of Event pages, almost all of which are indexed. At my last count, there were approximately 800 indexed Event pages. The problem? Almost all of these have expired, leading to a little bit of index bloat.
    THINGS TO CONSIDER: A spot check revealed that traffic for each Event occurs for about a two-to-four week period, then disappears completely once the Event expires. About half of these indexed Event pages redirect to a new page, so the indexed URL will be /events/name-of-event but will redirect to /state/city/events/name-of-event.
    QUESTIONS I'M ASKING: How do we address all these old events that provide no real value to the user? What should a future process look like to prevent this from happening?
    MY SOLUTION:
    Step 1: Add a noindex to each of the currently expired Event pages. Since some of these pages have link equity (one event had 8 unique links pointing to it), I don't want to just 404 all of them, and redirecting them doesn't seem like a good idea since one of the goals is to reduce the number of indexed pages that provide no value to users.
    Step 2: Remove all of the expired Event pages from the sitemap and resubmit. This is an ongoing process due to a variety of factors, so we'd wrap this up into a complete sitemap overhaul for the client. We would also be removing the Events from the website so there are no internal links pointing to them.
    Step 3: Write a rule (well, have their developers write a rule) that automatically adds noindex to each Event page once it's expired.
    Step 4: Wait for Google to re-crawl the site and hopefully remove the expired Events from its index.
    Thoughts? I feel like this is the simplest way to get things done quickly while preventing future expired events from being indexed. All of this is part of a bigger project involving the overhaul of the way Events are linked to on the website (since we wouldn't be 404ing them, I would simply suggest that they be removed entirely from all navigation), but ultimately, automating the process once we get this concern cleaned up is the direction I want to go. Thanks. Eager to hear all your thoughts.

    | Alces
    0
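
Step 3 is usually only a few lines in WordPress-style setups. A hypothetical sketch, assuming an `event` post type and an `event_end_date` meta key storing a parseable date - both names are placeholders for whatever the site actually uses:

```php
<?php
// Output a robots noindex tag on single Event pages whose end date
// has passed. 'event' and 'event_end_date' are hypothetical names.
add_action( 'wp_head', function () {
    if ( ! is_singular( 'event' ) ) {
        return;
    }
    $end = get_post_meta( get_the_ID(), 'event_end_date', true );
    if ( $end && strtotime( $end ) < time() ) {
        echo '<meta name="robots" content="noindex">' . "\n";
    }
} );
```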

  • Now, there are some conflicting beliefs here and I want to know what you think. If I got a high-spam website to remove my backlink, is a disavow through Search Console still necessary? Keep in mind, if it helps even in the slightest to improve rankings, I'm for it!

    | Colemckeon
    1

  • If I search for my client's phone number on Google, without gaps, ie 02036315541,  another company comes up at the top of the list. This company has a similar name to ours, but it is in a different town and it does different things. My company name is Energy Contract Renewals https://www.energycontractrenewals.co.uk/ and their company is https://energyrenewals.co.uk. As far as I can see, the other company does not mention our phone number anywhere on their site or on their GMB page so I don't know why they are coming up.  We do not come up at all for this search. However,  if I put our phone number in like this: 020 3631 5541, our company does come up and the other company does not. Anyone know how I can correct this or if it is even possible to do something about it?

    | mfrgolfgti
    1

  • I do not have a sku, global identifier, rating or offer for my product. Nonetheless it is my product. The price is variable (as it's insurance) so it would be inappropriate to provide a high or low price. Therefore, these items were not included in my product schema. SD Testing tool showed 2 warnings, for missing sku and global identifier. Google Search Console gave me an error today that said:  'offers, review, or aggregateRating should be specified' I don't want to be dishonest in supplying any of these, but I also don't want to have my page deprecated in the search results. BUT I DO want my item to show up as a product. Should I forget the product schema? Advice/suggestions? Thanks in advance.

    | RoxBrock
    1

  • Hi, my website is also opening via its IP address. I think that's duplicate content for Google... only the home page opens via the IP, no other pages. How can I fix it? I think I might be able to do it using .htaccess, but I don't know the proper code for this... this website is on the WordPress platform... Thanks Ramesh

    | unibiz
    0
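
A minimal sketch of the kind of .htaccess rule being asked about, assuming Apache with mod_rewrite (replace the placeholder IP and domain with the real ones):

```apache
RewriteEngine On
# If the request arrived via the bare IP address...
RewriteCond %{HTTP_HOST} ^123\.45\.67\.89$
# ...301 it to the same path on the canonical domain.
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```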

  • Hi, I know it's best practice to redirect a website from HTTP to HTTPS, instead of having many entry points to your website. When a website has been running for a long time on both HTTP and HTTPS, what are the SEO pros and cons of implementing a redirect from HTTP to HTTPS?

    | FreddyKgapza
    1
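
For reference, the redirect itself is typically a short mod_rewrite block. A minimal sketch, assuming Apache:

```apache
RewriteEngine On
# Any request that did not arrive over TLS...
RewriteCond %{HTTPS} off
# ...gets a permanent redirect to the https:// equivalent.
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
```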

  • A new page template was created. The plan is to publish the new page (which has the same URL as before) to the web and delete the old page that had that URL. Will that have any SEO implications?

    | lina_digital
    1

  • Hello, Our company is international and we are looking to gain more traffic specifically from Europe. While I am aware that translating content into local languages, targeting local keywords, and gaining more European links will improve rankings, I am curious whether it is worthwhile to have a company.eu domain in addition to our company.com domain. Assuming the website's content and domain will be exactly the same, with the TLD (.eu vs .com) being the only change - will this benefit us, or will it hurt us by creating duplicate content, even if we create a separate GSC property for it with localized targeting and hreflang tags? Also - if we have multiple languages on our .eu website, can different paths have differing hreflangs? I.e. company.eu/blog/german-content with a German hreflang and company.eu/blog/italian-content with an Italian hreflang. I should note we do not currently have an hreflang attribute set on our website, as content has always been correctly served to US-based English-speaking users - we do have the United States targeted in Google Search Console though. It would be ideal to target countries by subfolder instead, if that is just as useful. Otherwise, we would essentially be maintaining two sites. Thanks!

    | Tom3_15
    0
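
On the last sub-question: yes, hreflang is declared per URL, so different paths on the same domain can carry different language annotations, as long as each cluster of translations is reciprocal. A sketch using the question's illustrative paths - this set would be repeated on each URL it lists, and a different post (e.g. the Italian one) would carry its own separate set:

```html
<!-- On company.eu/blog/german-content and on each listed alternate -->
<link rel="alternate" hreflang="de"
      href="https://company.eu/blog/german-content" />
<link rel="alternate" hreflang="en-us"
      href="https://company.com/blog/english-content" />
```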

  • What is the best way to handle all the different variations of a website in terms of www | non-www | http | https? In Google Search Console, I have all 4 versions and I have selected a preference. In Open Site Explorer I can see that the www and non-www versions are treated differently, with one group of links pointing to each version of the same page. This gives a different PA score, e.g. http://mydomain.com DA 25 PA 35 vs. http://www.mydomain.com DA 19 PA 21 - each version of the home page having its own set of links and scores. Should I try to "consolidate" all the scores into one page? Should I set up redirects to my preferred version of the website? Thanks in advance

    | I.AM.Strategist
    0
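
Yes to both: a 301 from the non-preferred host to the preferred one is the standard consolidation mechanism, since it points links to both hostnames at a single set of URLs over time. A minimal .htaccess sketch, assuming Apache and the www version as the keeper (using the question's own mydomain.com):

```apache
RewriteEngine On
# Requests for the bare domain...
RewriteCond %{HTTP_HOST} ^mydomain\.com$ [NC]
# ...301 to the same path on the www hostname.
RewriteRule ^(.*)$ http://www.mydomain.com/$1 [R=301,L]
```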

  • I'm working on some htaccess redirects for a few stray pages and have come across a few different varieties of 301s that are confusing me a bit. Most sources suggest: Redirect 301 /pageA.html http://www.site.com/pageB.html or using some combination of RewriteRule + RewriteCond + regex. I've also found examples of: RedirectPermanent /pageA.html http://www.site.com/pageB.html I'm confused because our current htaccess file has quite a few (working) redirects that look like this: Redirect permanent /pageA.html http://www.site.com/pageB.html This syntax seems to work, but I've yet to find another "Redirect permanent" in the wild, only examples of Redirect 301 or RedirectPermanent. Is there any difference between these? Would I benefit at all from replacing Redirect permanent with Redirect 301?

    | SamKlep
    1
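
Per the Apache mod_alias documentation, all three spellings are the same directive: `permanent` is simply the keyword for status 301, and `RedirectPermanent` is an older alias for it, so there is nothing to gain by rewriting working `Redirect permanent` rules:

```apache
# These three lines are equivalent; each sends a 301.
Redirect 301 /pageA.html http://www.site.com/pageB.html
Redirect permanent /pageA.html http://www.site.com/pageB.html
RedirectPermanent /pageA.html http://www.site.com/pageB.html
```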

  • I'm implementing Schema.org (JSON-LD) on an eCommerce site. Each product has a few different variations, and these variations can change the price (think T-shirts, but blue & white cost $5, red is $5.50, and yellow is $6). In my Schema.org markup, in each Product's Offer, I could either have a single Offer with a price range (minPrice: $5, maxPrice: $6), or I could add a separate Offer for each variation, each with its own correct price set. Is one of these better than the other? Why? I've been looking at the WooCommerce code and they seem to do the single offer with a price range, but that could be because it's more flexible for a system that's used by millions of people.

    | 4RS_John
    1
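
For reference, the price-range variant is normally expressed as a schema.org `AggregateOffer` (with `lowPrice`/`highPrice`) rather than a plain `Offer` with min/max properties, which is what WooCommerce's output corresponds to. A minimal sketch using the question's T-shirt numbers:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example T-shirt",
  "offers": {
    "@type": "AggregateOffer",
    "priceCurrency": "USD",
    "lowPrice": "5.00",
    "highPrice": "6.00",
    "offerCount": "4"
  }
}
```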

  • Hello everyone, I am facing an issue with the sitemap submission feature in Bing Webmaster Tools for a Japanese-language subdirectory project. Just to outline the key points: The website is based on a subdirectory URL (example.com/ja/). The Japanese URLs (when pages are published in WordPress) are not being encoded; they are entered in pure Kanji. Google's webmaster tools, for instance, have no issues reading and indexing the pages' URLs in the sitemap submission area (all pages are being indexed). When it comes to Bing Webmaster Tools it's a different story, though. Basically, after the sitemap has been submitted (example.com/ja/sitemap.xml), it reports an error that it failed to download this part of the sitemap: "page-sitemap.xml" (the sitemap featuring all the site's pages). That means that no URLs have been submitted to Bing either. My apprehension is that Bing Webmaster Tools does not understand the Japanese URLs (or the Kanji, for that matter). Therefore, I generally wonder what the correct way is to go about this. When viewing the sitemap (example.com/ja/page-sitemap.xml) in a web browser, though, the Japanese URLs' characters are already displayed as encoded. I am not sure if submitting the Kanji-style URLs separately is a solution; in Bing Webmaster Tools this can only be done at the root domain level (example.com). However, surely there must be a way to make Bing's sitemap submission understand Japanese-style sitemaps? Many thanks everyone for any advice!

    | Hermski
    0
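
One thing worth checking either way: the sitemaps.org protocol requires `<loc>` URLs to be UTF-8 and entity-/percent-escaped, so a sitemap that serves the Kanji paths already percent-encoded (as the browser view suggests) is the compliant form. A sketch with a placeholder Kanji path:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <!-- example.com/ja/会社案内 with the path percent-encoded -->
    <loc>https://example.com/ja/%E4%BC%9A%E7%A4%BE%E6%A1%88%E5%86%85</loc>
  </url>
</urlset>
```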

  • Is it okay to leave HTML entity characters, such as &quot;, in meta descriptions? Will search engines translate these appropriately?

    | ellenu
    0

  • Hi guys, I am the webmaster of the following two websites: www.gpblog.com/nl
    www.gpblog.com/en The first URL is the Dutch version of GPBlog, the second URL is the UK version of GPBlog. Whenever a person visits www.gpblog.com, he gets redirected to either the Dutch version or the UK version based on his location. My question is: is it harmful to (1) have your homepage in a subfolder, and (2) run two different languages on one domain using this technique? Thank you in advance!

    | NielsDE
    1

  • I wonder if someone can help me understand page rank flow clearly. Say we have a very basic website with a Home page, Services, About and Contact, and page rank flows to each of those pages from the Home page (I'm not including internal linking between pages or anchor text from the home page content - this is a question purely about home page flow via the main navigation). If the Services page had 3 drop-down pages, would the home page's rank also flow to each of these, or does it go to the Services page, which then distributes it to the three drop-downs? So instead of the home page's rank flowing to 3 pages at 33.3% each, is it flowing to 6 pages at 16.7% each? Or is it flowing to 3 pages at 33.3% each, with the Services drop-down pages each getting a third of 33.3% -> 11.1%? I know this is simplifying it all a great deal, but it is the basic concept I am trying to grasp with this simple example. Thanks

    | AL123al
    0
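
Under the simplified model from the original PageRank paper, a page divides its score evenly across all links that actually appear in its HTML, so the deciding question is whether the drop-down items are rendered as links in the home page's markup (with most CMS menus, they are). A sketch of the arithmetic, with the assumptions noted in the comments:

```latex
\[
  \Delta \mathrm{PR}(v) \;=\; \frac{\mathrm{PR}(u)}{L(u)}
\]
% L(u) = number of outgoing links in page u's HTML.
% If the home page's menu renders all 6 pages as links, L(home) = 6 and
% each page receives PR(home)/6 (~16.7%).
% If only the 3 top-level pages are linked, L(home) = 3, each receives
% PR(home)/3 (~33.3%), and the drop-down pages instead receive a share
% of the Services page's own score, e.g. PR(services)/3 per child.
```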

  • I am a magician and have multiple location pages, one for each county I cover. I currently have them linked off the menu under locations/<county>, and also in the footer. However, I have heard that a link from within the page content is much stronger, so I am experimenting with removing the menu and footer links and just linking to these pages from within the content. It's not really a navigation item, and most people come in through search to the right page. Am I diluting the link by having it in the menu/page and footer? I read a long time ago that Google only considers the first link to a page and ignores the rest - is that the case? Thanks Roger https://www.rogerlapin.co.uk/

    | Rogerperk
    0

  • There are two robots.txt pages: one for the www version and another for the non-www version, though I have moved to the non-www version.

    | ramb
    0

  • Hi, I was implementing JSON-LD via Google Tag Manager but have since read that it is better to place it directly on the web page. My questions are: Where exactly do you add the JSON-LD - I mean physically? This article by Moz says the head or body. Does it matter where in the head? At the end? If you do not have development access, is there an easy way to insert it? Can you view the added JSON-LD directly on the web page after you have added it, or is it not visible in the source once you have added it this way? If you cannot see it, how do you view it? Or is it just OK to use the Google structured data testing tool as evidence that it is being seen by Google? Thanks

    | AL123al
    1
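
On the placement and visibility questions: Google accepts JSON-LD in either the `<head>` or the `<body>`, position within the head does not matter, and a block added directly to the template does show up in "view source" - unlike one injected by GTM, which only appears in the rendered DOM (browser element inspector) and in testing tools. A minimal sketch:

```html
<head>
  <title>Example page</title>
  <!-- JSON-LD is inert script; it renders nothing on the page. -->
  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Co",
    "url": "https://www.example.com/"
  }
  </script>
</head>
```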

  • My site (www.nomader.com) is currently built on Instapage, which does not offer the ability to add a robots.txt file. I plan to migrate to a Shopify site in the coming months, but for now the Instapage site is my primary website. In the interim, would you suggest that I manually request a Google crawl through the search console tool? If so, how often? Any other suggestions for countering this Meta Noindex issue?

    | Nomader
    1

  • What are the guidelines / best practices for clearing these errors? Google has some pretty vague documentation on how to handle this sort of error. User behavior metrics in GA are pretty much in line with desktop usage and don't show anything concerning. Any input is appreciated! Thanks

    | Digital_Reach
    2

  • Hey guys, I've always believed that search pages should be no-indexed but now I'm wondering if there is an argument to index them? Appreciate any thoughts!

    | RebekahVP
    0
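
For reference, the common pattern for keeping internal search results out of the crawl is a robots.txt disallow; the paths below are placeholders for whatever the site's search URLs actually look like:

```
User-agent: *
# Block crawl of internal search result pages.
Disallow: /search/
# WordPress-style search parameter.
Disallow: /*?s=
```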

  • Hi, I have a lot of issues popping up with temporary redirects and redirect chains. I'm still confused as to what exactly redirect chains are, and I don't know how to find where the "chains" are or how to fix them. I'm having two issues mainly:
    1. Temporary redirects: I have around 100 pages on our www.twowayradiosfor.com website that are being flagged as temporary redirects. All of them have one thing in common: they are review pages (basically, when a customer clicks on the Review button to review a certain product, they are redirected to a review page for that product). URL example: https://www.twowayradiosfor.com/reviewhelpful.asp?ProductCode=CLS1410-COMBO&ID=44&yes=no I went into our website and set any URL containing /review.asp as noindex. Will that fix the issue? If yes, will I also need to do that for any URL containing /reviewhelpful.asp?
    2. Redirect chains: It seems like basically every product page on my website has this issue (over 100 pages). Here's an example of one: https://www.twowayradiosfor.com/Motorola-CLS1110-p/cls1110.htm I don't see any broken links on this page or links that redirect to another page that redirects, etc. What is causing this? Is it something in my header bar that is redirecting (since the header bar appears on every page, maybe that is why this issue shows up on a lot of pages)? I am new to Moz and still trying to figure this stuff out. I really appreciate any help. Thanks, Sawyer

    | AllChargedUp
    0
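
On the definition question: a redirect chain is simply a URL that redirects to another URL which itself redirects again, so a visitor or crawler needs two or more hops to reach the final page. A sketch in .htaccess terms with placeholder paths, showing the chain and the collapsed fix:

```apache
# A chain: /old-page needs two hops to resolve.
Redirect 301 /old-page   /newer-page
Redirect 301 /newer-page /final-page

# Collapsed: change the first rule to point straight at the destination.
Redirect 301 /old-page   /final-page
```

A common source of flagged chains is internal links that point at a non-canonical form of a URL (http://, a non-www host, or a missing trailing slash), which then bounce through the site-wide host and HTTPS redirects before reaching the page.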

  • Hey Moz Community, I am looking for some help in identifying where the following meta description is coming from on this home page - https://www.apins.com. I have scrubbed through the page source without being able to locate where the content is being pulled from. The website is built on WordPress and the metas were updated using Yoast, but I am wondering if an installed plugin could be the culprit. On top of this, I have had a developer take a look for the "hack" and they have assured me that the issue has been removed. I have submitted the URL in GSC a couple of times to be re-indexed but have not had much luck. Any thoughts would be much appreciated; the displayed description is below. The health screening plays http://buyviagraonlineccm.com/ a significant and key role in detecting potentially life-threatening illnesses such as cancer, heart ...

    | jordankremer
    0

  • Hi, I am working for a SaaS client. He uses two different language versions on two different subdomains:
    de.domain.com/company for German and en.domain.com for English. Many thousands of URLs have been indexed correctly. But Google Search Console tries to index URLs which never existed before and still do not exist: de.domain.com/en/company, en.domain.com/de/company ... and a thousand more using /en/ or /de/ in between. We never use this variant, and calling these URLs correctly throws up a 404 page (but with the wrong response code - we're fixing that 😉). But Google tries to index these kinds of URLs again and again, and I couldn't find any source for these URLs. No website is using them as outgoing links, etc.
    We do see in our logfiles that a Screaming Frog installation and Moz's Open Site Explorer were trying to access these earlier. My question: how does Google come up with this? Where did they get these URLs, which (to our knowledge) never existed? Any ideas? Thanks 🙂

    | TheHecksler
    0
