Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Category: Technical SEO

Discuss site health, structure, and other technical SEO strategies.


  • Hello Moz community! I would like to make a special 301 redirect through my .htaccess file; I am a total noob concerning regexps and 301 redirects. I would like to 301-redirect this URL: http://www.legipermis.com/stages-points/">http://www.legipermis.com/stages-points/</a></p>; (yes, it's in Google's index, and this strange URL includes the final ;) to http://www.legipermis.com/stages-points/. I have already included a canonical tag as a precaution; I would like to remove the bad URL with a 301 redirect and also remove it through GWT (but the removal tool can't "eat" this kind of URL). Please consider that I am not an expert on 301 redirects and regexps, and no 301 redirect generator works properly for such a strange URL (which triggers content duplication, corrected anyway with the canonical tag). Thanks for your help.

    | LegiPermis
    0
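The question above is the sort of thing a short mod_rewrite rule can handle. A minimal sketch, assuming an Apache server (as the .htaccess mention implies) and that nothing legitimate lives below /stages-points/:

```apache
RewriteEngine On
# Anything after /stages-points/ (i.e. the stray '">...</a></p>;' residue)
# gets 301-redirected back to the clean URL. Tighten the pattern first
# if real sub-pages exist under /stages-points/.
RewriteRule ^stages-points/.+ http://www.legipermis.com/stages-points/ [R=301,L]
```

With the canonical tag already in place, the 301 should eventually drop the malformed URL from the index on its own.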

  • I keep getting the above error when I'm trying to look at the page optimization for a new blog page we have uploaded. It was posted over a week ago, so I assumed it would be indexed by now. Any thoughts on why it isn't working? The page is: http://www.esg.co.uk/blog/blog/2015/why-is-air-quality-testing-so-important/#.VmlBmLiLRpg Thanks

    | Charley_Tangerine
    0

  • Hi, I am getting these showing up in WMT crawl errors; any help would be very much appreciated:

    | # | URL | Response | Detected |
    | 1 | ?_escaped_fragment_=Meditation-find-peace-within/csso/55991bd90cf2efdf74ec3f60 | 404 | 12/5/15 |
    | 2 | mobile/?_escaped_fragment_= | 404 | 10/26/15 |
    | 3 | ?_escaped_fragment_=Tips-for-a-balanced-lifestyle/csso/1 | 404 | 12/1/15 |
    | 4 | ?_escaped_fragment_=My-favorite-yoga-spot/csso/5598e2130cf2585ebcde3b9a | 404 | 12/1/15 |
    | 5 | ?_escaped_fragment_=blog/c19s6 | 404 | 11/29/15 |
    | 6 | ?_escaped_fragment_=blog/c19s6/Tag/yoga | 404 | 11/30/15 |
    | 7 | ?_escaped_fragment_=Inhale-exhale-and-once-again/csso/2 | 404 | 11/27/15 |
    | 8 | ?_escaped_fragment_=classes/covl | 404 | 10/29/15 |
    | 9 | m/?_escaped_fragment_= | 404 | 10/26/15 |
    | 10 | ?_escaped_fragment_=blog/c19s6/Page/1 | 404 | 11/30/15 |

    | ReSEOlve
    0
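The ?_escaped_fragment_= URLs above come from Google's legacy AJAX-crawling scheme. One hedged option, assuming an Apache server, is to answer them with a 410 (Gone) so they fall out of the crawl-error report:

```apache
RewriteEngine On
# Match any request whose query string carries the legacy AJAX-crawling
# parameter and answer 410 Gone ([G] implies [L]).
RewriteCond %{QUERY_STRING} (^|&)_escaped_fragment_= [NC]
RewriteRule ^ - [G]
```

If those fragments map to real pages, a 301 to the equivalent clean URL would be the gentler alternative.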

  • Hello,
    I have a problem with an SEM Rush warning on a website audit for www.enjoyprepaid.com. It tells me "5852 pages are lacking language declaration", but I don't understand what that means or how to actually fix it. I also ran the W3C validator and have a doctype and language problem, but again I don't understand what they mean or how to fix them: https://validator.w3.org/nu/?doc=http%3A%2F%2Fwww.enjoyprepaid.com%2FAfghanistan-calling-cards-2.html

    | Silviu
    0
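For reference, "language declaration" warnings like the one above usually mean the doctype and/or the lang attribute on the root element are missing. A minimal sketch (the lang value and title here are placeholders; use each page's real language code):

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Afghanistan calling cards</title>
  </head>
  <body>…</body>
</html>
```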

  • New pages on my site usually shoot right into the index - often in under 24 hours.  Lately they are taking weeks to get into the index.  Are your new pages slow to index lately? Thanks for anything that you can report.

    | EGOL
    2

  • I have a client that set a change of address in Google Search Console where they informed Google that their preferred domain was a subdomain, and now they want Google to also consider their base domain (without the change of address). How do I get the change of address in Google search console removed?

    | KatherineWatierOng
    0

  • Hi Folks, I've got a client who has duplicate content because they deliberately create it: they store the same piece of content in 2 different places. When they generate this duplicate content, it creates a 2nd link on the site going to the duplicate content. Now they want the 2nd link to always redirect to the first link, but for architecture reasons, they can't remove the 2nd link from the site navigation. We can't use rel=canonical because they don't want visitors going to that 2nd page. Here is my question: are there any adverse SEO implications to maintaining a link on a site that always redirects to a different page? I've already gone down the road of "don't deliberately create duplicate content" with the client. They've heard me, but won't change. So, what are your thoughts? Thanks!

    | Rock33
    0

  • Hi guys, I was wondering if it matters when your hosting server is in a different country than the one your website is targeting (a .de (Germany) website with the server in France)? Thanks!

    | Happy-SEO
    2

  • Hi there, I have a question that I haven't been able to answer reasonably yet, so I'm going to trust in all of you. Basically, a site has all its pages indexed by Google (I verified with site:sitename.com) and it also has great and unique content. All on-page grades are A with absolutely no negative factors at all. However, its pages get almost no impressions at all. Of course I didn't expect it to be on page 1, since it launched on Dec 1st, but it looks like Google is ignoring it (or giving it bad scores) for some reason. The only things that could contribute to that are: domain privacy on the domain, the redirect from www to the subdomain we use (we did this because it will be a multi-language site, so we'll assign each country a subdomain), and recency (it went online on Dec 1st and the domain is just a couple of months old). Or maybe because we blocked crawlers for a few days before the launch, exactly in the days before Dec 1st? What do you think? What could be the reason for that? Thanks guys!

    | ruggero
    0

  • We have a whole bunch of websites that all use stock imagery. Some of the images have alt tags on them but some of them do not. To add alt tags to all of them would be a pretty lengthy process. So, would there actually be any value in adding these tags to stock images across the board?

    | chrisvogel
    0

  • A site of ours and its competitors have all seen pretty big drops in domain authority over the past week or so. Just wondering whether this could be something to do with a change in how DA is measured, or whether it's our backlink profile that is the problem. Any feedback would be much appreciated!

    | SEO_Jellyfish
    0

  • Hi, Google has indexed a few .CSS and .JS files that belong to our WordPress plugins and themes. I had them blocked via robots, but realized this doesn't prevent indexation (and can likely hurt us since Google wants to access these files). I've since removed the robots instructions, submitted a removal request via Search Console, but want to make sure they don't come back. Is there a way to put a noindex tag within .CSS and .JS files? Or should I do something with .htaccess instead?

    | kirmeliux
    1
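Since .css and .js files cannot carry a <meta name="robots"> tag, the usual answer to the question above is an HTTP response header instead. A sketch assuming Apache with mod_headers enabled:

```apache
# Send a noindex hint as an HTTP response header for CSS/JS files.
# Do NOT also block these paths in robots.txt, or Google will never
# fetch the files and so never see this header.
<IfModule mod_headers.c>
  <FilesMatch "\.(css|js)$">
    Header set X-Robots-Tag "noindex"
  </FilesMatch>
</IfModule>
```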

  • Our client received an email from Bing WMT saying "We have identified 4 known issues with your website in Microsoft Edge – the new default browser for Windows 10 and Bing". Of the four problems mentioned, only two seem to be relevant (maybe): 1) "We've found that this webpage may include HTML markup that treats Microsoft Edge differently from other modern browsers. The new EdgeHTML rendering engine for Microsoft Edge is document-mode agnostic and designed for fast, modern rendering. We recommend that you implement one code base for all modern browsers and include Microsoft Edge as part of your modern browser test matrix." 2) "We've found that this webpage may have missing vendor-specific prefixes, or may have implemented vendor-specific prefixes when they are not required in common CSS properties. This may cause compatibility problems with how this webpage renders across different browsers." Last month the client received 20K visitors from all IE browsers, which is significant enough to be concerned about. Are other folks making changes to their code to adapt to MS Edge?

    | RosemaryB
    0

  • We are currently using vanity URLs to track offline marketing. The vanity URL is structured as www.clientdomain.com/publication, and this URL is then 302-redirected to the actual URL on the website, not a custom landing page. The resulting redirected URL looks like: www.clientdomain.com/xyzpage?utm_source=print&utm_medium=print&utm_campaign=printcampaign. We have started to notice that some of the vanity URLs are being indexed in Google search. To prevent this from happening, should we be using a 301 redirect instead of a 302, and will the Google index ignore the utm parameters in the URL being 301-redirected to? If not, any suggestions on how to handle it? Thanks,

    | seogirl22
    1
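For the vanity-URL question above, the 302-to-301 switch can be a one-liner. A sketch assuming Apache's mod_alias, with the path and UTM values taken from the question:

```apache
# Permanent redirect: tells Google the vanity URL has moved for good,
# which makes it far less likely to stay in the index.
Redirect 301 /publication http://www.clientdomain.com/xyzpage?utm_source=print&utm_medium=print&utm_campaign=printcampaign
```

A canonical tag on the landing page (pointing at the URL without the utm parameters) is the usual belt-and-braces companion, since Google can treat URLs with different query strings as distinct pages.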

  • Hi, We've noticed a drop in traffic compared to the previous month and the same period last year. We've also noticed a sharp spike in indexed pages (almost doubled) as reported by Search Console. The two seem to be linked, as the drop in traffic coincides with the spike in indexed pages. The only change we made to our site during this period is that we reskinned our blog. One of these changes is that we've enabled 'normal' (not AJAX) pagination. Our blog has a lot of content on it, and we have about 550-odd pages of posts. My question is, would this impact the number of pages indexed by Google, and if so, could this negatively impact organic traffic? Many thanks, Jason

    | Clickmetrics
    0

  • Hello, I am working on an e-commerce site where there are categories with multiple pages. In order to avoid pagination issues, I was thinking of using rel=next and rel=prev and canonical tags. I noticed a site where the URL doesn't change between pages, so whether you're on page 1, 2, or 3 of the same category, the URL stays the same. Would this be a cleaner way of dealing with pagination?

    | whiteonlySEO
    0

  • Hi there, I have a robots.txt query which I haven't tried before and as we're nearing a big time for sales, I'm hesitant to just roll out to live! Say for example, in my robots.txt I disallow the URL 'example1.html'. In reality, 'example1.html' 301s/302s to 'example2.html'. Would the robots.txt directive also apply to 'example2.html' (disallow) or as it's a separate URL, would the directive be ignored as it's not valid? I have a feeling that as it's a separate URL, the robots disallow directive won't apply. However, just thought I'd sense-check with the community.

    | ecommercebc
    0

  • Hi, we have a mobile version of our website at mobile.gardening-services-edinburgh.com. It's been live for 5, maybe 6 months, and it has its own mobile-sitemap.xml. I have tried submitting this sitemap to Google, but for some reason it does not index these pages. Any ideas most welcome.

    | McSEO
    0

  • Good morning, We have a Magento website with hundreds of different products that have slight size variations. The image for each of these products looks the same (the only difference between the products is some of the dimensions). Would you recommend using the same image for each of these products with a generic file name that describes the overall product, or would you give each product its own image with its specific product name as the file name? Should I use 1 image for 500 different SKUs, or should I rename the file to each individual SKU's name and load an individual image? The end user will not know the difference, since all of the images will appear identical; I'm simply asking from an SEO perspective. Thanks

    | Prime85
    0

  • Hi guys, I wonder what your opinion is concerning exclusion via the robots.txt file.
    Do you advise continuing to use this? For example:

    User-agent: *
    Disallow: /sale/*
    Disallow: /cart/*
    Disallow: /search/
    Disallow: /account/
    Disallow: /wishlist/*

    Or do you prefer using the meta tag 'noindex, follow' instead?
    I keep hearing different suggestions.
    I'm just curious what your opinion / suggestion is. Regards,
    Tom Vledder

    | AdenaSEO
    0
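For comparison with the robots.txt approach in the question above, the meta-tag alternative goes in the <head> of each page to be excluded (e.g. a /search/ or /account/ page): the page can still be crawled and its links followed, but it stays out of the index.

```html
<meta name="robots" content="noindex, follow">
```

The practical difference: robots.txt blocks crawling (but the URL can still end up indexed from external links), while noindex requires crawling yet prevents indexing.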

  • I have 2 urgent errors saying there are 2 x error code 909s detected. These don't link to any page, but I can tell there is a mistake somewhere; I just don't know what needs changing. http://www.justkeyrings.co.ukhttp/www.justkeyrings.co.uk/printed-promotional-keyrings http://www.justkeyrings.co.ukhttp/www.justkeyrings.co.uk/blank-unassembled-keyrings Could someone help please?

    | FullSteamBusiness
    0

  • A lot of pages on our website are currently indexed on both their http:// and https:// URLs. I realise that this is a duplicate content problem, but how major an issue is this in practice? Also, am I right in saying that the best solution would be to use rel canonical tags to highlight the https pages as the canonical versions?

    | RG_SEO
    0
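One way to mark the https version as canonical, as the question above suggests, is a link element on both the http and https copies of each page (the URL here is a placeholder):

```html
<link rel="canonical" href="https://www.example.com/some-page/">
```

A site-wide 301 from http to https is the stronger long-term fix, with the canonical tag as a safety net.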

  • The robots file looks like it has been set up in a very messy way.
    I understand the # comments out a line; does this mean the sitemap would not be picked up?
    Should Disallow: /js/ instead be allowed, like /*.js$?
    Disallow: /media/wysiwyg/ seems to be causing alerts in Webmaster Tools, as it cannot access the images within.
    Can anyone help me clean this up please?

    #Sitemap: https://examplesite.com/sitemap.xml

    # Crawlers Setup
    User-agent: *
    Crawl-delay: 10

    # Allowable Index
    # Mind that Allow is not an official standard
    Allow: /index.php/blog/
    Allow: /catalog/seo_sitemap/category/
    Allow: /catalogsearch/result/
    Allow: /media/catalog/

    # Directories
    Disallow: /404/
    Disallow: /app/
    Disallow: /cgi-bin/
    Disallow: /downloader/
    Disallow: /errors/
    Disallow: /includes/
    Disallow: /js/
    Disallow: /lib/
    Disallow: /magento/
    Disallow: /media/
    Disallow: /media/captcha/
    Disallow: /media/catalog/
    #Disallow: /media/css/
    #Disallow: /media/css_secure/
    Disallow: /media/customer/
    Disallow: /media/dhl/
    Disallow: /media/downloadable/
    Disallow: /media/import/
    #Disallow: /media/js/
    Disallow: /media/pdf/
    Disallow: /media/sales/
    Disallow: /media/tmp/
    Disallow: /media/wysiwyg/
    Disallow: /media/xmlconnect/
    Disallow: /pkginfo/
    Disallow: /report/
    Disallow: /scripts/
    Disallow: /shell/
    #Disallow: /skin/
    Disallow: /stats/
    Disallow: /var/

    # Paths (clean URLs)
    Disallow: /index.php/
    Disallow: /catalog/product_compare/
    Disallow: /catalog/category/view/
    Disallow: /catalog/product/view/
    Disallow: /catalog/product/gallery/
    Disallow: */catalog/product/upload/
    Disallow: /catalogsearch/
    Disallow: /checkout/
    Disallow: /control/
    Disallow: /contacts/
    Disallow: /customer/
    Disallow: /customize/
    Disallow: /newsletter/
    Disallow: /poll/
    Disallow: /review/
    Disallow: /sendfriend/
    Disallow: /tag/
    Disallow: /wishlist/

    # Files
    Disallow: /cron.php
    Disallow: /cron.sh
    Disallow: /error_log
    Disallow: /install.php
    Disallow: /LICENSE.html
    Disallow: /LICENSE.txt
    Disallow: /LICENSE_AFL.txt
    Disallow: /STATUS.txt
    Disallow: /get.php # Magento 1.5+

    # Paths (no clean URLs)
    #Disallow: /*.js$
    #Disallow: /*.css$
    Disallow: /*.php$
    Disallow: /*?SID=
    Disallow: /rss*
    Disallow: /*PHPSESSID
    Disallow: /:
    Disallow: /:*

    User-agent: Fatbot
    Disallow: /

    User-agent: TwengaBot-2.0
    Disallow: /

    | mcwork
    0

  • Currently I have a corporate website that is ranking all right. However, I have some additional domains containing important search terms that I would like to use to get higher rankings for the corporate website, or to allow these domains to generate more traffic for it. What are the best practices for using these keyword-term domains to make the most of them, ideally for both ranking and generating additional traffic? All input is highly appreciated.

    | moojoo
    0

  • How does one mark up their site to get the small sitelinks to appear in SERP listings, as seen in the example image?

    | SelectHub
    0

  • Hi folks, I've got a question: we have about 3 million image pages with unique URLs on our site. All images with a caption are transmitted to the Google index, which covers 2/3 of all images. We are afraid that this could cause some problems due to thin content. Please take a look at one of our article pages with such a photo gallery: http://goo.gl/hq6bxG All gallery pics with a caption are indexed: http://goo.gl/gd9TQ6 Do you have any advice on how to handle those photo galleries? How should they be flagged for Google? Every pic "noindex" and a "canonical" tag to the article? Thx a lot! Matthias

    | Mulle
    0

  • Is it better to have all your pages point to the www version or the non-www version?

    | bronxpad
    0
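A common way to enforce one hostname, sketched for Apache with example.com as a placeholder (reverse the condition and target to prefer the non-www version instead):

```apache
RewriteEngine On
# 301 every non-www request to the www host.
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```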

  • Hi, Please see the attached photos. I have a website that shows external follow links when performing a search on Open Site Explorer. However, they are not recognised or visible in Search Console. This is the case for both internal and external links. The internal links are 'no follow', which I am getting the developer to rectify. Any ideas why I can't see the 'follow' external links? Thanks in advance to those who help me out. Jesse

    | jessew
    0

  • I am migrating from HubSpot to WordPress. How do I do this so I maintain my previous SEO efforts? I have about 10 optimized blog posts. Can I cut and paste them into the WordPress blog, or use a 301 URL rewrite? Add to domain? Totally clueless! Your Noobian friend, Jay

    | freshairtech
    0

  • Hello wonderful Moz community!  I need some advice.  Here is the situation:  I work in a small division within a much larger company.  We each have our own domain, i.e. www.parent.com and www.child.com.  We (the child) have a domain authority of 57, while our parent has a domain authority of 86.  Our blog lives on blogs.parent.com/child.  My understanding is that www.brand.com/blogs is better for SEO than blogs.brand.com (we had no control of directory structure decisions at the parent level). Given all that, in terms of maximizing traffic to our domain, would we be better off moving our blog to www.child.com/blogs? Here are a couple of potential pros/cons bouncing around in my newbie brain: a) By moving the blog to our domain, our whole site could benefit from having any external links our blog posts earn point back to our domain vs. our parent's domain. b) On the other hand, leaving the blog on our parent's domain and then linking to our content from posts over there might give our content a boost.  (Of course, that theory is shot down if Google recognizes our parent/child relationship and doesn't reward our site with the benefit of linkbacks coming from our parent domain.) What say you?  Are there other angles to this I’m not even considering?  If you think the right decision is to move the blog over to our site, any suggestions on how not to screw that up?  (301’s, etc.) Thanks in advance for your thoughts! -John

    | jomosi
    0

  • Wondering if there is any value for e-comm sites in featuring a reverse-breadcrumb-like structure in the URL? For example: https://www.grainger.com/category/anchor-bolts/anchors/fasteners/ecatalog/N-8j5?ssf=3&ssf=3, where we have reverse categorization happening, with /level2-sub-cat/level1-sub-cat/category in the reverse order of the actual location on the site. Category: Fasteners
    Sub-Cat (level 1): Anchors
    Sub-Cat (level 2): Anchor Bolts

    | ROI_DNA
    0

  • Hi, thanks for looking. I am trying (and failing) to write an .htaccess for the following scenario: http://www.gardening-services-edinburgh.com/index.html http://www.gardening-services-edinburgh.com http://www.gardening-services-edinburgh.com/ so that all of these destinations go to the one resource. Any ideas? Thanks, Andy

    | McSEO
    0
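A sketch for the scenario above, assuming Apache: the bare host and the trailing-slash URL are already the same resource, so only /index.html needs collapsing onto the root with a 301.

```apache
RewriteEngine On
# Redirect direct requests for /index.html to the root URL.
# THE_REQUEST is checked so DirectoryIndex's internal rewrite of
# "/" to index.html doesn't cause a redirect loop.
RewriteCond %{THE_REQUEST} \s/index\.html [NC]
RewriteRule ^index\.html$ http://www.gardening-services-edinburgh.com/ [R=301,L]
```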

  • We are developing a new website and intend to use JavaScript to make the page more manageable on the screen with a show more / show less. For example, http://www.tanzaniaodyssey1.com/tanzania/serengeti, where a large amount of the content of the page is "hidden". From what I can see, Google won't penalise us, but am I right in saying that this content will not be indexed and will not count for Google? I need to make sure that this text is indexed, but I also don't want to have horrendously long, un-user-friendly pages. Does anyone have a way round this? (Could I put the words in a <noscript> tag?)

    | EdD-DigitalPotion
    0

  • Hello Dear MOZers, Having duplicate content issues and I'd like some opinions on how best to deal with this problem. Background: I run a website for a cosmetic surgeon in which the most valuable content area is the section of before/after photos of our patients. We have 200+ pages (one patient per page) and each page has a 'description' block of text and a handful of before and after photos. Photos are labeled with very similar labels patient-to-patient ("before surgery", "after surgery", "during surgery" etc). Currently, each page has a unique rel=canonical tag. But MOZ Crawl Diagnostics has found these pages to be duplicate content of each other. For example, using a 'similar page checker' two of these pages were found to be 97% similar. As far as I understand there are a few ways to deal with this, and I'd like to get your opinions on the best course. Add 150+ more words to each description text block Prevent indexing of patient pages with robots.txt Set the rel=canonical for each patient page to the main gallery page Any other options or suggestions? Please keep in mind that this is our most valuable content, so I would be reluctant to make major structural changes, or changes that would result in any decrease in traffic to these pages. Thank you folks, Ethan

    | BernsteinMedicalNYC
    0

  • Most of my webpages aren't crawled by Google.
    Why is that, and what can I do to make Google index at least most of my webpages?

    | Poutokas
    0

  • I've been working under the assumption for some time that if I have two (or more) pages which are very similar, I can add a noindex tag to the pages I don't need and that will reduce duplicate content. As far as I know, this removes the tagged pages from Google's index and stops any potential issues with duplicate content. It's the second part of that assumption that I'm now questioning. Despite pages having the noindex tag, they continue to appear in Google Search Console as duplicate content, soft 404s, etc. That is, new pages that I know have the noindex tag are appearing regularly. My thoughts on this so far are that Google can still crawl these pages (although it won't index them), so it shows them in GSC due to a crude issue-flagging process. I mainly want to know: a) Is the actual Google algorithm sophisticated enough to ignore these pages even though GSC doesn't? b) How do I explain this to a client?

    | ChrisJFoster
    0

  • Couple of schema questions: Should I 'only' mark up the contact page, as this has the most information? What about the header and footer, should I tag everything there also? If I do mark up the header, footer, and contact page, I end up with 3 "LocalBusiness" entries in Google testing tool, is that bad?

    | MichaelGregory
    0

  • I am rebuilding our website together with an agency and I am stuck with the following problem: we have a page which will provide the visitor with a quick and convincing impression of why he should choose our enterprise. On this page we want to show our USPs (Unique Selling Points), each with a title and a short description. Now, my preferred way of presenting those USPs would be a list of the titles (which permits seeing all USPs without having to read a lot of text) where each title can be clicked to expand the description (in case you want to know more about this specific USP); if you click on another title, the previously clicked description will collapse and the new one expand, and so on (similar to this page: http://www.berlin-city-immobilien.de/38.html - I'm talking about the list in the middle of the page starting with the headline "Dabei profitieren Sie von folgenden Vorteilen"). Since I also want to use these descriptions as on-page SEO texts, I checked whether Google might not index, or at least value, "click to expand" content less than plain text in the body of the page, and I stumbled over this article: https://www.seroundtable.com/google-hidden-tab-content-seo-19489.html. According to this article, Google will definitely discount the descriptions on my page. Does anyone have an idea how to solve this problem? Either by suggesting a different way to show titles and descriptions on the page, or maybe a workaround so Google will not treat the descriptions as "click to expand" text. Thank you already in advance for your input.
    Ben

    | Benni
    0

  • So recently Webmaster tools showed a million "not found" errors with the url "plugins/feedback.php/blah blah blah." A little googling helped me find that this comes from the Facebook comment box plugin. Apparently some changes recently have made this start happening. The question is, what's the right fix? The thread I was reading suggested adding "Disallow: /plugins/feedback.php" to the robots.txt file and marking them all fixed. Any ideas?

    | cbrant777
    0

  • My site has tons of summary pages, whether for a PDF download, a landing page or an article. There is a summary page that explains the asset and contains a link to the actual asset. My question is: if the summary page is just a summary of an article with a "click here to read full article" button, should I set the rel canonical on the summary page to point to the full article? Thanks,

    | Autoboof
    0

  • Hello, This coming weekend we will be changing our http sites to https versions. I have a very quick question regarding Google Search Console. Because the migration is happening over a weekend, we want to get as much as possible setup beforehand. Is there any risk to adding the new properties to the search console without the sites being live yet? I want to deliver the Search Console verify files to our IT team in advance for them to add to the site, and then once I get the okay that the migration went successfully, I would go into the Search Console and click on the Verify button to get the sites verified and of course, then fetch as Google to help speed up indexing a bit and ensure there are no errors. Any insight on this would be greatly appreciated! Amiee

    | Amiee
    0

  • Hi, TL;DR: Does https://schema.org/BreadcrumbList work? It's been some time since I last implemented schema.org markup for breadcrumbs. Back then the situation was that google explicitly discouraged the use of the schema.org markup for breadcrumbs. In my experience it had been pretty hit or miss - sometimes it worked without issues; sometimes it did not work without obvious reason. Consequently, I ditched it for the data-vocabulary.org markup which did not give me any issues. However, I prefer using schema.org and currently a new site is being designed for a client. Thus, I'd like to use schema.org markup for the breadcrumb - but of course only if it works now. Google has dropped the previous warning/discouragements and by now lists a schema.org code https://developers.google.com/structured-data/breadcrumbs based on the new-ish https://schema.org/BreadcrumbList. Has anybody here used this markup on a site (preferably more than one) and can confirm whether or not it is reliably working and showing the breadcrumb trail / site hierarchy in the SERP? Thanks for your answers! Nico

    | netzkern_AG
    0
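For what it's worth, the BreadcrumbList markup that Google's structured-data page documents looks like the JSON-LD sketch below (names and URLs are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    { "@type": "ListItem", "position": 1, "name": "Home",
      "item": "https://www.example.com/" },
    { "@type": "ListItem", "position": 2, "name": "Category",
      "item": "https://www.example.com/category/" }
  ]
}
</script>
```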

  • My modest website (shew-design.com) has pulled up nearly sixty crawl errors. Almost all of them are missing meta description tags. One friend who knows SEO better than me says that adding meta descriptions to EVERY page is not a good use of time. I'm just getting started in being serious about applying SEO to our site and I want to make sure I'm making the best use of my time. The other errors I'm getting are duplicate page names within different directories (e.g. getting started (for branding), getting started (for web)). Is this a huge priority? Would welcome your feedback.

    | Eric_Shew
    0

  • Is it bad to have the same H1 & H2 tag on one page? I found a similar question here on the moz forum but it didn't exactly answer my question. And will adding "about" on the H2 help, or should we avoid duplicate tags completely? Here is a link to the page in question (which will repeat throughout this site.) Thanks in advance!

    | Mike.Bean
    0

  • Hi guys, Just wondering what your processes are when moving a large site to a completely new URL structure on the same domain. Do you 301 everything from old page to new page, or are your more selective - i.e. only 301 pages that have a certain page authority, for example. Thanks!

    | A_Q
    0

  • Hi All, Thanks in advance for any help that you can offer in regards to this. I have been conducting a bit of analysis of our server access file to see what Googlebot is doing, where it is going, etc. Now firstly, I am not an SEO but have an interest. What I am seeing a lot of is that we have URLs with an extension that sets the currency displayed on the products, so that we can conduct AdWords campaigns in other countries; these show as follows: feedurl=AUD, feedurl=USD, feedurl=EUR etc. What I can see is that Googlebot is hitting a URL such as /some_product, then /someproduct?feedurl=USD, then /someproduct?feedurl=EUR and then /someproduct?feedurl=AUD, all one after another. Now this is the same product page, just with the price shown slightly differently on each. Would this count as a duplicate content issue? Should I disavow feedurl? Any assistance that you can offer would be greatly appreciated. Thanks, Tim

    | timsilver
    0

  • Is Schema(.org) markup meant specifically to be used on text? Or can you use it in a similar way to Open Graph Protocol? For example, for a while I've been using something like this on my site: Because it's in the head section, it appears on every page. On review, this seems to be an incorrect use? Should I only be using Schema to mark up specific text? If not, what are the consequences of using Schema like this?

    | eglove
    0

  • One of our clients (a children's clothing company) has split their categories (outerwear, tops, shoes) between boys and girls: there's one category page for girls' outerwear, and one category for boys' outerwear. I suspect that this redundant categorisation is diluting link juice and rankings for the related search queries. Important points: the clothes themselves are rather gender-neutral; girls' sweaters don't differ that much from boys' sweaters. Our keyword research indicates that Norwegians' search queries are also pretty gender-neutral; people are generally searching for "children's dresses", "shoes for kids", "snowsuits", etc. So these gender-specific categories are not really reflective of people's search behavior. I acknowledge that implementing a filter for "boys" and "girls" would be the best way to solve this redundant categorization, but that would simply be too expensive for our client. I'm thinking that some sort of canonicalisation would be the best approach to solve this issue. Are there any other suggestions or comments on this?

    | Inevo
    0

  • I am looking to find all the missing alt tags on my site at once. I have a FF extension that used to do it page by page, but my site is huge and that would take forever. Thanks!!

    | franchisesolutions
    1
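A site-wide crawl is beyond a quick snippet, but checking a single page for images without alt text needs only the Python standard library. A minimal sketch (the sample HTML is made up for illustration):

```python
from html.parser import HTMLParser

class MissingAltFinder(HTMLParser):
    """Collect the src of every <img> tag whose alt attribute is missing or empty."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):
                self.missing.append(attrs.get("src", "(no src)"))

def find_missing_alts(html):
    finder = MissingAltFinder()
    finder.feed(html)
    return finder.missing

if __name__ == "__main__":
    sample = '<img src="a.jpg" alt="logo"><img src="b.jpg"><img src="c.jpg" alt="">'
    print(find_missing_alts(sample))  # → ['b.jpg', 'c.jpg']
```

Feeding it each page's HTML (e.g. fetched with urllib) turns this into a whole-site report.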
