
Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Category: Technical SEO

Discuss site health, structure, and other technical SEO strategies.


  • People spend **more than twice as much time looking at the left side** of the page, which is why it's good practice to place your important links on the left side.

    | vivekrathore
    0

  • Hello, In the last month I noticed a huge spike in the number of pages indexed on my site, which I think is impacting my SEO quality score. While I only have about 90 pages in my sitemap, the number of pages indexed jumped to 446, with about 536 pages being blocked by robots. At first we thought this might be due to duplicate product pages showing up in different categories on my site, so we added something to our robots.txt file to not index those pages, but the number has not gone down. I've tried to consult with our hosting vendor, but no one seems to be concerned or have any idea why there was such a big jump in the last month. Any insights or pointers would be so greatly appreciated, so that I can fix/improve my SEO as quickly as possible! Thanks!

    | Saison
    0
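    A note on the robots.txt approach described above: a robots.txt Disallow only prevents crawling; it does not remove pages that are already indexed, and blocked URLs can linger in the index as URL-only entries. To actually drop the duplicates, the pages need to stay crawlable and carry a noindex signal instead, e.g. via an X-Robots-Tag header. A minimal .htaccess sketch (Apache 2.4+ with mod_headers; the /duplicates/ path is a hypothetical placeholder, not from the post):

    ```apache
    # Sketch: serve a noindex header for the duplicate category paths so
    # Google can crawl them and then drop them from the index.
    # The /duplicates/ prefix stands in for the real duplicate URLs.
    <If "%{REQUEST_URI} =~ m#^/duplicates/#">
        Header set X-Robots-Tag "noindex, follow"
    </If>
    ```

    Once the pages have dropped out of the index, the Disallow rule can be reinstated if crawl budget is a concern.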

  • Hi, I was told we have way too many 301 redirects on our site. Some have been there for 3 years. Our site is datacard.com. Question: how long should you keep a redirect in place when building a new page and expiring an old one? Is it 6 months, or some other time frame? Wondering what the best practices are. Thanks! Laura

    | lauramrobinson32
    0

  • We have a large and old site. As we've transitioned from one CMS to another, there's been a need to create 301 redirects using our .htaccess file. I'm not a technical SEO person, but I'm concerned that the size of our .htaccess file might be a contributing source of long page download times. Can large .htaccess files cause slow page load times? Or is the coding of the 301 redirects a cause for slow page downloads? Thanks

    | ahw
    1
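    On the question above: a very large .htaccess file can add measurable overhead, because Apache re-reads and re-evaluates it on every request (unlike rules in the main server config, which are parsed once at startup). Collapsing many one-off Redirect lines into pattern rules keeps the file small. A sketch with hypothetical paths, not taken from the post:

    ```apache
    # Instead of thousands of one-off lines like:
    #   Redirect 301 /old-news/article-1.html /news/article-1/
    # a single mod_alias pattern rule can cover a whole migrated section.
    RedirectMatch 301 ^/old-news/(.+)\.html$ /news/$1/
    ```

    Where server config access exists, moving the rules out of .htaccess (or into a mod_rewrite RewriteMap for large one-to-one lists) avoids the per-request parsing entirely.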

  • Hello all, I'm a new SEOer and I'm currently trying to navigate the layman's minefield that is trying to understand duplicate content issues as best I can. I'm working on a website at the moment where there's a duplicate content issue with blog archives/categories/tags etc. I was planning to beat this by implementing a noindex meta tag on those pages where there are duplicate content issues. Before I go ahead with this I thought: "Hey, these Moz guys seem to know what they're doing! What would Rand do?" Blogs on the website in question appear in full and in date order relating to the tag/category/what-have-you creating the duplicate content problem. Much like Rand's blog here at Moz - I thought I'd have a look at the source code to see how it was dealt with. My amateur eyes could find nothing to help answer this question: E.g. Both the following URLs appear in SERPs (using site:moz.com and very targeted keywords, but they're there): https://moz.rankious.com/_moz/rand/does-making-a-website-mobile-friendly-have-a-universally-positive-impact-on-mobile-traffic/ https://moz.rankious.com/_moz/rand/category/moz/ Both pages have a rel="canonical" pointing to themselves. I can understand why he wouldn't be fussed about the category not ranking, but the blog? Is this not having a negative effect? I'm just a little confused as there are so many conflicting "best practice" tips out there - and now after digging around in the source code on Rand's blog I'm more confused than ever! Any help much appreciated, Thanks

    | sbridle
    1

  • Our website was changed over to secure HTTP about two months ago. I just looked in Google Webmaster Tools and it only shows about 8 inbound links. We did a permanent 301 redirect for all URLs. There are over 800 links according to Open Site Explorer. Is it just that they are showing only the HTTPS inbound links? Should I add the HTTPS version in WMT? Thanks for any assistance

    | EBI
    0

  • Hey there, I am facing a problem with duplicate content because of limitstart (yup.. I am using Joomla). I can get rid of these duplicates by adding canonicals, but there are a lot (about 100 duplicates). I am thinking of adding the code below to my robots.txt: Disallow: /?limitstart= Will this do the trick and get rid of all these duplicate errors? Thanks in advance

    | tsalatzi
    0
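    On the robots.txt line proposed above: `Disallow: /?limitstart=` only matches URLs whose path starts exactly with `/?limitstart=`, so it would miss the same parameter on deeper paths. Googlebot supports the `*` wildcard, so a broader pattern (a sketch) would be:

    ```
    User-agent: *
    # match limitstart wherever it appears in the query string,
    # not only on the site root
    Disallow: /*?*limitstart=
    ```

    Note that robots.txt only stops future crawling; it won't remove duplicates that are already indexed, so the canonical tag remains the more thorough fix.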

  • Often we have web platforms that have a default URL structure that looks something like this: www.widgetcompany.co.uk/widget-gallery/coloured-widgets/red-widgets This format is quite well structured, but would it be more effective as www.widgetcompany.co.uk/red-widgets? I realise that it may depend on a lot of factors, but generally is it better to have the shorter URL if targeting the key phrase "red widgets"? One thing: it certainly looks a bit keyword-stuffy with all those "widgets"

    | vital_hike
    0

  • So my understanding is that you can use site: [page url without http] to check if a page is indexed by Google; is this 100% reliable though? Just recently I've worked on a few pages that have not shown up when I've checked them using site:, but they do show up when using info: and also show their cached versions. Also, the rest of the site and the pages above it (the URL I was checking was quite deep) are indexed just fine. What does this mean? Thank you. P.S. I do not have WMT or GA access for these sites

    | linklander
    0

  • I have GoDaddy website builder and a new website http://ecuadorvisapros.com and I noticed through your crawl test that there are 3 home pages: http://ecuadorvisapros with a 302 temporary redirect, http://www.ecuadorvisapros.com/ with no redirect, and http://www.ecuadorvisapros/home.html. GoDaddy says there is only one home page. Is this going to kill my chances of having a successful website, and can this be fixed? I actually went with the SEO version thinking it would be better, but it wants to auto-change my settings that I worked so hard at with your site's help. Please keep it simple; I am a novice, although I have had websites in the past, and I know more about the whats than the hows of websites. Thanks,

    | ScottR.
    0

  • Hi Everyone, I am creating a list of 301 redirects to give to a developer to put into Magento. I used Screaming Frog to crawl the site, but I have noticed that all of their URLs 302 to another page. I am wondering if I should 301 the first URL to the URL on the new site, or the second. I am thinking the first, but would love some confirmation. Thank you!

    | mrbobland
    0

  • So I know you can use dashes and | in meta tags, but can anyone tell me what other punctuation you can use? Also, it'd be great to know what punctuation you can't use. Thanks!

    | Trevorneo
    1

  • We're launching a blog on a sub-domain of a corp site (blog.corpsite.com). We already have corpsite.com set up in the Search Console. Should I set up a separate property for this sub-domain in the Search Console (WMT) in order to manage it? Is it necessary? Thanks, JM

    | HeroDesignStudio
    0

  • Hi, We have migrated to a new domain name and I wrote my redirects as follows: Redirect 301 / http://www.healthpointe.net Redirect 301 /urgent_care_locations.shtml http://www.healthpointe.net/healthpointe-locations/ Redirect 301 /locations.shtml http://www.healthpointe.net/healthpointe-locations/ Redirect 301 /career_client_relations_rep.shtml http://www.healthpointe.net/careers/ My issue is that when I include the first redirect, which is to the main page of the website, the other redirects stop working. Any idea what the problem could be?

    | healthpointeseo
    0
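    The behaviour described above is expected: mod_alias `Redirect` does prefix matching, so `Redirect 301 /` matches every request and fires before the more specific rules get a chance. Anchoring the root rule with a regex fixes it. A sketch using the URLs from the post:

    ```apache
    # RedirectMatch takes a regex, so ^/$ matches only the bare root URL
    # instead of prefix-matching everything the way "Redirect 301 /" does.
    RedirectMatch 301 ^/$ http://www.healthpointe.net/
    Redirect 301 /urgent_care_locations.shtml http://www.healthpointe.net/healthpointe-locations/
    Redirect 301 /locations.shtml http://www.healthpointe.net/healthpointe-locations/
    Redirect 301 /career_client_relations_rep.shtml http://www.healthpointe.net/careers/
    ```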

  • Hello, Facing a strange issue. My WordPress blog hghscience[dot]com was hacked by someone. When I checked, I found the index.php file was changed and it was showing some page with a hacked message, and an index.html file had also been added to the cPanel account. All pages were showing the same message. When I found it, I replaced index.php with the default WordPress index.php file and deleted index.html. I could not find any other file which looked suspicious. The site started working fine and it was also indexed, but the cached version was that hacked page. I used Webmaster Tools to fetch and render it as Googlebot and submitted it for indexing. After that I noticed the home page got deindexed by Google. All the other pages are indexing like before. The site was hacked around 30th July and I fixed it on 1st Aug. Since then the home page is not getting indexed; I have tried to fetch and index multiple times via Google Webmaster Tools but no luck as of now. One more thing I noticed: when I use info:mysite.com on Google, it shows some other hacked site (www.whatsmyreferer.com/) when searching from India, but when the same info:mysite.com is searched from the US, a different hacked site shows (sigaretamogilev.by). However, when I search "mysite.com" my site's home page appears in Google search, but when I check the cached URL it shows the hacked sites mentioned above. As far as I can tell, I have checked all SEO plugins and the code of the homepage and can't find anything that is stopping the homepage from being indexed. PS: Webmaster Tools has received no warnings for a penalty or malware. I also noticed I had disallowed the index.php file via robots.txt earlier, but now I have removed even that.

    | killthebillion
    0

  • Hi, Currently we have a page, /business, but we have shifted our strategy to optimize for this page for the keyword "enterprise" instead of "business".  The page authority of this page is 18 and our domain authority is 35. I've already updated content and title tags to more of an enterprise focus. Would it be wise to move the page to /enterprise and create a 301 redirect from /business to /enterprise?  Or is this too risky from an SEO standpoint? Thanks!

    | mikekeeper
    0

  • We are wanting to refresh old posts on our blog and schedule them to get “republished” to the home page of our blog on a future date. However, when we edit the “Published” settings date and set it to a future date, the post gets removed from the blog and put back into a “Scheduled” status. Many of these posts are “evergreen” and bringing in traffic so we don’t want to have them get removed from our site. So to recap, we need the ability to be able to reschedule already published posts to get “re-published” back on our home page without having them get removed from the site. Does anyone know of a plugin or solution to this problem? Any help would be appreciated.

    | eyepaq
    0

  • My domain is: http://www.freescrabbledictionary.com/ "Scrabble Dictionary" is a huge keyword in my niche, where I used to rank top 4. Do you see this domain as possibly being hit by the EMD update? My Google Analytics does not show that I was hit back in Sept 2012 when it first came out.

    | cbielich
    0

  • I already have an XML sitemap, so I've been researching how to create an HTML sitemap with over 10,000 URLs for an ecommerce website. Any program, paid or unpaid, will do; it just needs to produce something that looks good to put in the footer of our website.

    | ntsupply
    0
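    For a list this size, a small script can generate the HTML sitemap pages directly from the URL list. A minimal Python sketch (the 500-links-per-page split and the sitemap-N.html file names are arbitrary choices, not requirements):

    ```python
    # Minimal sketch: split a large URL list into linked HTML sitemap pages.
    from html import escape

    def build_sitemap_pages(urls, per_page=500):
        """Return (filename, html) tuples, one per generated sitemap page."""
        chunks = [urls[i:i + per_page] for i in range(0, len(urls), per_page)]
        pages = []
        for n, chunk in enumerate(chunks, start=1):
            links = "\n".join(
                f'<li><a href="{escape(u)}">{escape(u)}</a></li>' for u in chunk
            )
            # simple pagination links so every page is reachable in one click
            nav = " | ".join(
                f'<a href="sitemap-{i}.html">Page {i}</a>'
                for i in range(1, len(chunks) + 1)
            )
            html = (f"<h1>Site Map (page {n})</h1>\n"
                    f"<ul>\n{links}\n</ul>\n<p>{nav}</p>\n")
            pages.append((f"sitemap-{n}.html", html))
        return pages

    if __name__ == "__main__":
        demo = [f"/product-{i}" for i in range(10500)]
        print(f"generated {len(build_sitemap_pages(demo))} sitemap pages")
    ```

    Splitting into pages of a few hundred links keeps each page readable and keeps the per-page link count modest for crawlers.
    
    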

  • Good Morning Mozzers... We just recently launched a brand new site and now the fun part begins: trying to get it to appear in the SERPs. I'm wondering if you guys can share your best and most proven secrets/tricks to get brand new sites to rank in Google... For example, what are the first directories you add the site to? What are some links you try to acquire first? Looking for some tips and ideas for a brand new site. Thanks in advance.

    | Prime85
    0

  • Hi guys. I've got an e-commerce site which we have very little control over. As such, we've created a subdomain and are hosting a WordPress install there instead. This means that all the great content we're putting out (via bespoke pages on the subdomain) is less effective than if it were on the main domain. I've looked at proxy forwarding, but unfortunately it isn't possible through our servers, leaving the only option I can see being permanent redirects... What would be the best solution given the limitations of the root site? I'm thinking of wildcard rewrite rules (e.g. link site.com/blog/articleTitle to blog.site.com/articleTitle) but I'm wondering if there's much of an SEO benefit in doing this? Thanks in advance for everyone's help 🙂

    | JAR897
    0
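    For reference, the wildcard rewrite described above would look something like this in the root site's .htaccess (mod_rewrite; the domain names follow the example in the post). Note this only redirects visitors out to the subdomain; serving the blog content *under* the main domain would need a reverse proxy (the `[P]` flag plus mod_proxy), which the post suggests isn't available here:

    ```apache
    RewriteEngine On
    # send site.com/blog/article-title out to blog.site.com/article-title
    RewriteRule ^blog/(.*)$ https://blog.site.com/$1 [R=301,L]
    ```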

  • Does the use of cufon for H-tags et al affect SEO/how Google views your website?

    | Alligator
    0

  • Hi, I am a wedding photographer based in Liverpool. I have been trying to do my own SEO for the last 6 months. I have been hovering around the top of page two for the main search terms for the past few years. I used an SEO company before Christmas who got a lot of spammy links, which resulted in my site dropping to page 4 of the SERPs. With the help of this forum I managed to locate and disavow those links, and have tried to do it myself since. I have managed to gain a few "featured weddings" on national wedding blogs, wrote a few articles for another wedding blog, and also made some forum comments. I have also got a few links, for example from a wedding band in exchange for some photographs. I have got onto page 1 about 4 times; the best result was position 6 on page 1, but every time I have slowly dropped out again. I have methodically (once a month) checked for any of the spammy links and updated the disavow list. My competitors have at best old forum comments and the like, and on checking their websites with Open Site Explorer they are not actively link building at all. I have just checked my Webmaster Tools and Google is only recognising 51 links (none of my good wedding blog links are there). I have an external links CSV from the 28th June with 602 links on it. I changed my website around May of this year, but it is still on the same domain name, www.dwliverpoolphotography.co.uk. Can anybody help? Best wishes. David.

    | WallerD
    0

  • Hello, I have wanted to ask this for a long time, and I finally gathered my energy to ask this long question at Moz. Like almost all newbies with little knowledge of SEO and Google, I started my first blog in 2009. Things were very different at that time, and by posting more and more I was getting good results and started to build decent traffic, but with poor content (I really didn't care about it) as I was getting organic traffic anyway. But things changed completely with Google Panda after 11th April 2011. Since then traffic has kept falling. I never made backlinks, so the Penguin updates never hit us, but because of poor and thin content the site went lower and lower. I took some steps like increasing the word count of posts and removing some posts, but nothing has worked so far. The blog has almost 1200 articles, and most importantly it was my first blog, so I am a bit attached to it. Now my question is: should I just dispose of the blog and move on, or is there something I can try to recover it? The blog is 6 years old and has received 2 million organic visits as of now (organic traffic screenshot attached). I will appreciate some genuine advice on that. Thanks

    | killthebillion
    0

  • I am trying to get things to match up between the company brand web search and the Google+ page, which we have had for years now. The knowledge graph on Google is showing the map, address and name (shown in the attached image), but is not linked to a G+ page, as when I click "Are you the business owner?" it tries to make me create a new G+ business page. Anyone have any ideas on this? Also, does the wiki name have to be exact for it to show? As for the phone number, where would that be coming from, as it is nowhere in the rich snippet markup or normal markup? Thanks in advance

    | David-McGawn
    0

  • This may sound crazy, but is it possible a hamburger nav used on a desktop responsive site could lead to a drop in organic traffic? Our hamburger nav requires between 3-4 clicks to arrive on a subpage. Here's an example of the necessary clicks for a given page: 1) click hamburger menu 2) resorts (expands resorts nav) 3) city (expands various cities where our resorts are located) 4) actual resort (this opens up resort menu) 5) resort overview (first clickable link). Is there any way we're getting penalized for an excessive number of clicks? All our old pages were 301 redirected and content is relatively the same on the new redesigned website. Thanks

    | SoulSurfer8
    0

  • Hello, After crawling our site, Moz is detecting high-priority duplicate page content for our product and article listing pages. For example, http://store.bmiresearch.com/bangladesh/power and http://store.bmiresearch.com/newzealand/power are being listed as duplicate pages, although they have separate URLs, page titles and H1 tags. They have the same products listed, but I would have thought the differentiation in other areas would be sufficient for these not to be deemed duplicate pages. Is it likely this issue will be impacting our search rankings? If so, are there any recommendations as to how this issue can be overcome? Thanks

    | carlsutherland
    0

  • Hi Mozzers, I have a large service based website, which seems to be losing pages within Google's index. Whilst working on the site, I noticed that there are a number of xml sitemaps for each of the services. So I submitted them to webmaster tools last Friday (14th) and when I left they were "pending". On returning to the office today, they all appear to have been successfully processed on either the 15th or 17th and I can see the following data: 13/08 - Submitted=0 Indexed=0
    14/08 - Submitted=606,733 Indexed=122,243
    15/08 - Submitted=606,733 Indexed=494,651
    16/08 - Submitted=606,733 Indexed=517,527
    17/08 - Submitted=606,733 Indexed=517,498 Question 1: The indexed pages on 14th of 122,243 - Is this how many pages were previously indexed? Before Google processed the sitemaps? As they were not marked processed until 15th and 17th? Question 2: The indexed pages are already slipping, I'm working on fixing the site by reducing pages and improving internal structure and content, which I'm hoping will fix the crawling issue. But how often will Google crawl these XML sitemaps? Thanks in advance for any help.

    | Silkstream
    0

  • Hi All, Having a robots.txt looking like the below, will this stop Google crawling the site? User-agent: *

    | internetsalesdrive
    0
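    To answer the (truncated) question above: a robots.txt containing only `User-agent: *` with no `Disallow` value blocks nothing. The two meaningful variants are:

    ```
    # allows full crawling (an empty Disallow means "disallow nothing")
    User-agent: *
    Disallow:

    # blocks the entire site from compliant crawlers:
    # User-agent: *
    # Disallow: /
    ```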

  • Hello Everyone, I need to start work on this site, www.tajsigma.com, and I want to know if I need to add anything for good ranking. What are the things that need updating on this site for SEO? I know the basic things (robots, title, description), but I want to know of any additional things that I need to apply. Can anyone help, please? Thanks in advance

    | falguniinnovative
    0

  • I work for a company that has many promotions throughout the year, some big, some HUGE. Typically they have created a landing page for this content. The issue is, when a promotion ends, we kill the landing page, thus 404ing the backlinks and putting the page authority in purgatory. (1) What would be the best way to keep these pages organized? I was thinking about creating a main "Promotions" page with the current promotion on it (and the previous ones linked at the bottom of the page). Then when the promotion ends I would copy those contents, add them to a new page, and link to it from the original "Promotions" page. An issue I see with this is that the promotions page would always have the same title tag and vanity URL. (2) This could provide many links to the "Promotions" page over time to build its authority, but would constantly changing content hurt ranking factors?

    | nat88han
    0

  • Fetch as Google often says that some of my stylesheets and js files are temporarily unreachable. Is that a problem for SEO? These stylesheets and scripts aren't blocked and Search Consoles show that a normal user would see the page just fine.

    | WebGain
    0

  • There are so many 302 redirected links you found, most of which are for pages which need users to log in to view them, so redirection to the login page is unavoidable. For example: https://www.stopwobble.com/wishlist/index/add/product/98199/form_key/QE0kEzOF2yO3DTtt/ Also, we don't have product compare functionality, but still there are so many links from the compare page which redirect to the respective category page. For example: http://www.stopwobble.com/catalog/product_compare/add/product/98199/uenc/aHR0cDovL3d3dy5zdG9wd29iYmxlLmNvbS93b2JibGUtd2VkZ2Vz/form_key/QE0kEzOF2yO3DTtt/ We need to know where the Moz crawler is detecting these links so that we can suppress them from being crawled. I have already tried to review the overall site and confirmed these links exist nowhere in the page source or in sitemap.xml

    | torbett
    0

  • I have hired two website people from oDesk and had bad experiences with 301 redirects. One never got it done, and the other caused my website to crash with a 500 error code; he loaded all the redirects in a script in .htaccess on my WordPress site on GoDaddy, and there were 1400 redirects. Can somebody recommend an expert for my redirect problem?

    | tjacquet
    0

  • Hello again Mozzers, I am debating what could be a fairly drastic change to the company website and I would appreciate your thoughts. The URL structure is currently as follows Product Pages
    www.url.co.uk/product.html Category Pages
    www.url.co.uk/products/category/subcategory.html I am debating removing the /products/ section as I feel it doesn't really add much and lengthens the URL with a pointless word. This does mean, however, redirecting about 50-60 pages on the website; is this worth it? Would it do more damage than good? Am I just being a bit OCD, and it won't really have an impact? As always, thanks for the input

    | ATP
    0

  • Lately I've noticed more and more 503/504 errors being flagged in my MOZ reports. One week I had over 1300 errors show up. I checked Google Webmaster Tools and Bing Webmaster tools and noticed they were showing up in there too, although not near as many (50 or less per day). I contacted my hosting company about it and they said these were normal and that it was due to one nameserver reaching capacity, but that there was a backup nameserver that kicks in. I've seen one or two of these errors show up before, but never more than one or two a week. Is this something I should be concerned about?

    | Kyle Eaves
    0

  • Hi Mozers, We are working on a website for a UK charity. They are a hospice and have two distinct brands, one for their adult services and another for their children's services. They currently have two different websites which have a large number of pages that contain identical text. We spoke with them and agreed that it would be better to combine the websites under one URL; that way a number of the duplicate pages could be reduced, as they are relevant to both brands. What seemed like a good idea initially is beginning to not look so good now. We had planned to use CSS to load different style sheets for each brand: depending on the referring URL (adult / child), the page would display the appropriate branding. This will work well up to a point. What we can't work out is how to style the page if it is the initial landing page; the brands are quite different and we need to get this right. It is not such an issue for the management-type pages (board of trustees etc.) as they govern both identities. The issue is the donation and fundraising pages: they need to be found, and we are concerned that users will be confused if one of those pages is the initial landing page and they are served the wrong brand. We have thought of making one page the main page and using rel canonical on the other one, but that will affect its ability to be found in the search engines. Really not sure what the best way to move forward would be; any suggestions / guidance would be much appreciated. Thanks Fraser.

    | fraserhannah
    0

  • I recently started getting email notifications from Google re: new products on our websites. I am subscribed to Google alerts. Can anyone shed some light on this?

    | AMHC
    0

  • Dear friends, I have a videos portal. I created a video sitemap.xml and submitted it to GWT, but after 20 days it has not been indexed. I have verified in Bing Webmaster as well. All videos are dynamically fetched from the server. All my static pages have been indexed, but not the videos. There are no separate pages for single videos; all the content comes dynamically from the server. Please help me figure out where I am making a mistake. Your answers will be much appreciated. Thanks

    | docbeans
    0

  • Hi, I've read several articles about the correct process for moving a blog from a subdomain to the main root domain but am not quite 100% sure as to what to do in our scenario. They were hosting their blog on Hubspot which puts the blog on a sub-domain "blog.rootdomain.com". Realizing it isn't benefiting the main website for SEO they want to move it to the main website. I understand we have to redirect the Hubspot "blog." pages to the new "rootdomain.com/blog" pages but when transferred over (it's a WordPress site) it shows the dates. So, the URL is "rootdomain.com/blog/year/month/title". They want to remove the date. Does that mean the URL must be re-written then redirected so that there's no date showing? There's over 300 posts which will have to be redirected from the Hubspot URLs. Is there a way to avoid setting up the second redirect to remove the dates or make it easier so it isn't one page at a time?

    | Flock.Media
    0
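    On the date-stripping question above: one regex redirect can cover all 300+ dated URLs at once, so there is no need for a rule per post. A sketch, assuming the slug is unchanged apart from the /year/month/ segment (`rootdomain.com` stands in for the real domain):

    ```apache
    # collapse /blog/2015/07/some-post/ down to /blog/some-post/
    RedirectMatch 301 ^/blog/\d{4}/\d{2}/(.+)$ https://rootdomain.com/blog/$1
    ```

    It is also worth setting the WordPress permalink structure to `/blog/%postname%/` so new posts never get dates in their URLs in the first place.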

  • Greetings to the fellow Moz community members! On an e-commerce site, I am using a script to change the default currency of storefront based on IP detection ( GBP for UK visitors, CAD for Canadian visitors and so on). My question is : can this create any problems at all in Google Crawling or Indexing? Will google be able to understand the setup? I don't think this should trigger the "cloaking" or presenting different content to search engines vs users, but just want to double check from the collective wisdom here. Thanks for reading, and wish you a good day ahead. Warm Regards Amit

    | amitgg
    0

  • Hi, We are currently in the process of making changes to our travel site whereby if someone does a search, this information can be stored, and the user can also take the URL, paste it into their browser, and find that search again. The URL will be dynamic for every search, so in order to stop duplicate content I wanted to ask what would be the best approach to creating the URLs. **An example of the URL is:** package-search/holidays/hotelFilters/?depart=LGW&arrival=BJV&sdate=20150812&edate=20150819&adult=2&child=0&infant=0&fsearch=first&directf=false&nights=7&tsdate=&rooms=1&r1a=2&r1c=0&r1i=0&&dest=3&desid=1&rating=&htype=all&btype=all&filter=no&page=1 I wanted to know if people have previous experience with something like this and what would be the best option for SEO. Will we need to create the URL with a # (as I read this stops Google crawling after the #), or block the folder in robots.txt? Are there any other areas I should be aware of in order to stop any duplicate content and 404 pages once the URL/holiday search is no longer valid? Thanks E

    | Direct_Ram
    0
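    One common pattern for search URLs like the example above is to keep them out of the index entirely, rather than trying to canonicalise every parameter combination. A robots.txt sketch (path taken from the example URL):

    ```
    User-agent: *
    # keep the dynamic holiday-search result pages out of the crawl
    Disallow: /package-search/holidays/hotelFilters/
    ```

    On the `#` idea: the fragment after a `#` is never sent to the server and is ignored by Google when crawling, so it would hide the parameters, but it also means those search states have no crawlable URL at all.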

  • I made some changes to my website, and after that I tried the Webmaster Tools FETCH AS GOOGLE option, but this is the 2nd day and my new pages are not indexed: www. astrologersktantrik .com

    | ramansaab
    0

  • Hello friends, I am finding it difficult to get the following page indexed in search: http://www.niyati.sg/mobile-app-cost.htm It was uploaded over two weeks back. For indexing and troubleshooting, we have already done the following: The page is hyperlinked from the site's inner pages, a few external websites and Google+; Submitted to Google (through the Submit URL option); Used the 'Fetch and Render' and 'Submit to index' options on Search Console (WMT); Added the URL to both HTML and XML sitemaps; Checked for any crawl errors or Google penalty (page and site level) on Search Console; Checked the Meta tags, Robots.txt and .htaccess files for any blocking. Any idea what may have gone wrong? Thanks in advance!
    Ramesh Nair

    | RameshNair
    0

  • I know how to run site:domain.com but I am looking for software that will put these results into a list and return server status (200, 404, etc). Anyone have any tips?

    | InfinityTechnologySolutions
    0
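    A small script can take the URL list described above and print each URL with its HTTP status. A minimal Python sketch using only the standard library (note that `urlopen` follows redirects, so a 301 reports its final destination's status; 4xx/5xx errors still yield their status code):

    ```python
    # Sketch: report the server status for a list of URLs, e.g. a list
    # pasted together from site: search results.
    from urllib.request import urlopen
    from urllib.error import HTTPError, URLError

    def status_of(url, opener=urlopen):
        """Return the HTTP status code for url, or an error description."""
        try:
            with opener(url, timeout=10) as resp:
                return resp.status
        except HTTPError as e:        # 4xx/5xx responses still carry a code
            return e.code
        except URLError as e:         # DNS failures, refused connections, ...
            return str(e.reason)

    def report(urls, opener=urlopen):
        """Pair every URL with its status, preserving input order."""
        return [(url, status_of(url, opener)) for url in urls]

    if __name__ == "__main__":
        for url, status in report(["https://example.com/"]):
            print(url, status)
    ```

    The `opener` parameter is just an injection point so the status logic can be exercised without live network calls.
    
    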

  • Hello, Has anyone ever had a problem with a display:none portion of a page made for accessibility? (JAWS reader / NVDA) Thank you.

    | Vale7
    0

  • Hi, I have a big problem. For the past month, my company website has been scraped by hackers. This is how they do it: 1. Hack unmonitored sites and/or sites that are still using old versions of WordPress or other out-of-the-box CMSes. 2. Create spam pages with links to my pages, plus plant a trojan horse and a script to automatically grab resources from my server. Some sites were directly uploaded with pages from my site. 3. Pages are created with title, keywords and description consisting of my company brand name. 4. The http-referrer is used to redirect Google search results to competitor sites. What I have done so far: 1. Blocked the identified sites' IPs in my WAF. This prevented those hacked sites from grabbing resources from my site via scripts. 2. Reached out to webmasters and hosting companies to remove those affected sites. Currently this is not very effective, as many of the sites have no webmaster. Only a few hosting companies respond promptly; some don't even reply after a week. The problem now is: when I realized this was happening, there were already hundreds if not thousands of sites being used by the hacker. Literally tens of thousands of sites have been crawled by Google, and the hacked or scripted pages with my company brand title, keywords and description have already been indexed. Routinely, every day, I am removing and disavowing, but there are just so many of them now indexed by Google. Questions: 1. What is the best way forward for me to resolve this? 2. Disavow links and domains: does disavowing a domain mean all the links from that domain are disavowed? 3. Can anyone recommend an SEO company which has dealt with such an issue before and successfully rectified similar issues? Note: SEAGM is the company branded keyword.

    | ahming777
    0

  • About a year ago we started using Bazaarvoice to get reviews for our products, and it has been great as far as accumulating content, but Google is not taking the schema.org data and displaying it on the SERP. Someone has told me it is because we are offering multiple products, or that our schema.org tags are incorrect but when I compare our code to other travel sites it seems like everyone is doing something different. This is especially annoying since the Google schema markup check says everything is fine. Does anyone have any advice or similar experiences? Thanks.

    | tripcentral
    0
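    For comparison, a minimal product review markup sketch in JSON-LD (all values hypothetical). One common reason review stars don't show on listing-style pages is that the rating isn't tied to one specific product entity on the page, which matches the "multiple products" theory mentioned above:

    ```json
    {
      "@context": "https://schema.org",
      "@type": "Product",
      "name": "Example Resort Package",
      "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.3",
        "reviewCount": "87"
      }
    }
    ```

    Also worth remembering that passing the markup validator only means the syntax is valid; Google still chooses whether to display rich results.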

  • I'm currently developing my brothers new website and taking care of the SEO. He provides roofing services and uPVC fascias, soffits & guttering service. He is looking to target multiple towns and cities within a region (Yorkshire). Each service has its own page but I'm wondering if it would be better to create a service page for each town with different content? It's quite difficult to re-write the service content for each town and not repeat yourself. For example, we're looking to target "roofer in leeds" "roofer in sheffield" "roofing services wakefield" etc etc Obviously it's more difficult to rank outside your physical town as the registered address is on Google maps but with content and link building we should see some results. I look forward to hearing some feedback.

    | Jseddon92
    0

  • I am working on a project for a client that has two ecommerce sites each with several thousands of products. Site A has a strong DA, is ranking well on Google for thousands of competitive keywords and generating high traffic and conversions. Site B has a poor DA, ranking poorly and much less traffic. We are considering the idea of merging the 5,000+ product pages from site B into site A. How can we evaluate whether this would be a wise move with the least risk to site A?

    | richdan
    0
