
Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Category: Technical SEO

Discuss site health, structure, and other technical SEO strategies.


  • What is the best way to handle interstitial ads for the search engines? We saw in Google Webmaster Tools a big spike in 403 errors, all from interstitial ad URLs. Would noindex work? Blocking /interstitial in our robots.txt file? And would that stop the search engines from moving past the ad URL? (A robots.txt sketch follows below.)

    | bonnierSEO
    0
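
    A minimal robots.txt sketch for the blocking approach asked about above, assuming the ad pages all live under a /interstitial path (hypothetical). Note that a robots.txt block only prevents crawling, so Google would never see a noindex tag placed on those pages; pick one approach or the other.

    # robots.txt at the site root: keeps compliant crawlers out of the ad URLs
    User-agent: *
    Disallow: /interstitial

    If the goal is instead to de-index ad URLs Google has already found, leave them crawlable and serve a noindex robots meta tag or an X-Robots-Tag header on them.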

  • I know that SEOmoz says: "if deleted, you will not be able to get it back. So be careful!" here: http://www.seomoz.org/help/campaign-settings, but we really need it back, and it's urgent! Has anyone ever recovered a deleted campaign? I just sent an email to the SEOmoz help team, but it's 6 PM now... Can anyone help? Thanks!

    | shikotanaka
    0

  • Hi, are there any tools out there that can enable me to find pages that serve both www and non-www versions of each URL? (The usual server-side fix is sketched below.)

    | monster99
    0
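
    Once such duplicates are found, the usual fix is a single site-wide 301 rather than hunting pages down one by one. A minimal .htaccess sketch, assuming an Apache host and mydomain.com as a stand-in:

    RewriteEngine On
    # send every non-www request to the www version, preserving the path
    RewriteCond %{HTTP_HOST} ^mydomain\.com$ [NC]
    RewriteRule ^(.*)$ http://www.mydomain.com/$1 [R=301,L]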

  • I run the site www.rent.com.au. We are a real estate portal dedicated to the rental market across Australia. We have been adding more content to our site and now have close to 80% of all real estate agent listings in Australia. Over the past few months we have spent time on:
    canonicalisation of our search result and property pages;
    updating titles and descriptions to match key terms;
    adding an internal link structure to promote the capital cities of Australia.
    Yet despite this work there are sites which are much smaller, have less traffic, and haven't changed for years outranking us for terms like "rental properties perth" and "houses for rent perth". One of these sites is rentfind.com.au. Looking at the competitive domain analysis between us and rentfind, we are better in every measurement. I have compared our backlinks and there isn't much difference in who is linking to us. My conclusion is that there is something wrong with the way we are organising our content for search engines. I think it could be because of our search drill-down from our home page, but before I do anything drastic it would be good to get some feedback on what we could be doing wrong.

    | Rent
    0

  • Background information: We have a website (devicelock.com) which is currently our corporate website. The company used to operate under ntutility.com, which is now being redirected to devicelock.com via a DNS forward, a 302 redirect. The IT admin (a founder of the company) is reluctant to change it to a 301. The current flow is: ntutility.com redirects to protect-me.com, then redirects again to devicelock.com. When I search for DeviceLock on Google, it shows up as ntutility.com; there is no devicelock.com homepage in Google search. Question: Are there any negative implications to this? Is it hurting our SEO in any way? When I do link building, will this have any negative effects? Will my links for DeviceLock be attributed to devicelock.com? (A sketch of the server-side fix is below.)

    | Devicelock
    0
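
    For reference, a DNS forward cannot issue a 301 by itself; the old domain has to resolve to a web server that sends the redirect. A minimal .htaccess sketch for such a server, using the hostnames from the question and assuming Apache:

    RewriteEngine On
    # permanently redirect every ntutility.com URL to devicelock.com
    RewriteCond %{HTTP_HOST} ^(www\.)?ntutility\.com$ [NC]
    RewriteRule ^(.*)$ http://www.devicelock.com/$1 [R=301,L]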

  • Hello, my site is being checked for errors by the PRO dashboard here, and some odd duplicate content errors have appeared. Every page has a duplicate because you can see the page and the same page under /~username, so www.short-hairstyles.com is the same as www.short-hairstyles.com/~wwwshor. I don't know if this is a problem or how the crawler found this (I'm sure I have never linked to it), but I'd like to know how to prevent it in case it is a problem, if anyone knows please? (A possible fix is sketched below.) Ian

    | jwdl
    0
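
    The /~wwwshor copy is a common artifact of shared Apache hosting (userdir URLs). If it really is crawlable, one hedged fix is a 301 in .htaccess collapsing it onto the canonical path:

    # map the userdir duplicate back onto the real site, keeping any sub-path
    RedirectMatch 301 ^/~wwwshor(/.*)?$ http://www.short-hairstyles.com$1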

  • Hi all, I operate a Dutch website (sneeuwsporter.nl); the website is a database of European ski resorts and accommodations (hotels, chalets etc). We launched about a month ago with a database of 1700+ accommodations. For every accommodation we collected general information such as what village it is in, how far it is from the centre and how many stars it has. This information is shown in a list on the right of each page (e.g. http://www.sneeuwsporter.nl/oostenrijk/zillertal-3000/mayrhofen/appartementen-meckyheim/). In addition, a text about the accommodation is auto-generated based on some of the properties that are also in the list (distance, stars etc). Below the paragraph about the accommodation is a paragraph about the village the accommodation is located in; this is a general text that is the same for all the accommodations in that village. Below that is a general text about the resort area, which is also identical on all the accommodation pages in the area. So a lot of these texts about the village and the area are used many times on different pages.
    Things went well at first and every day we got more Google traffic and more indexed pages. But a few days ago our organic traffic took a near 100% dive; we are hardly listed anymore, and where we are, it is at very low positions. We suspect Google gave us a penalty, for two reasons: we have auto-generated texts that vary only slightly per page, and we re-use the content about villages and areas on many pages. We quickly removed the content about the villages and resort areas, because we are pretty sure that is something Google does not want. We are less sure about the auto-generated content; is this something we should remove as well? These are normal, readable texts, they just happen to be structured more or less the same way on every page. Finally, when we have made these and maybe some other fixes, what is the best and quickest way to let Google see us again and show them we improved? Thanks in advance!

    | sneeuwsporter
    0

  • I have a DNN site for which I created friendly URLs; however, the creation of the friendly URLs then created duplicate page content and titles. I was able to fix all but two URLs with rel="canonical" links. But the two that are giving me the most issues are pointing to my homepage. When I added the rel="canonical" link, the page became non-indexable. And for whatever reason I can't add a 301 redirect to the homepage, because it then gives me a "can't display webpage" error message. I am new to SEO and to DNN, so any help would be greatly appreciated.

    | VeronicaCFowler
    0

  • Hi, we recently moved to Shopify and noticed that our home page is now .co.uk/ (with a trailing slash) instead of .co.uk. As both are returning 200 OK, I am concerned that this could cause a duplicate content issue. Could anyone please advise on the best way to fix this? Thanks, Paul

    | devoted2vintage
    0

  • Hi, I have just come across some URLs from the previous web designer, and the site structure has now changed. There are some links on the web, however, that are still pointing at the old deep links. Without having to contact each site, is there a way to automatically redirect links from the old structure, www.mydomain.com/show/english/index.aspx, to just www.mydomain.com? (See the sketch below.) Many thanks

    | ocelot
    0
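
    You cannot change the links on other people's sites, but you can catch visitors and crawlers when they arrive. A minimal .htaccess sketch, assuming Apache and the URL pattern from the question: the specific page first, then a catch-all for anything else left over from the old structure.

    Redirect 301 /show/english/index.aspx http://www.mydomain.com/
    RedirectMatch 301 ^/show/ http://www.mydomain.com/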

  • We have thousands of pages we've been trying to have de-indexed in Google for months now. They've all got a robots meta tag, but they simply will not go away in the SERPs. Here is just one example: http://bitly.com/VutCFi. If you search this URL in Google, you will see that it is indexed, yet it's had the tag for many months. This is just one example of thousands of pages that will not get de-indexed. Am I missing something here? Does it have to do with using content="none" instead of content="noindex, follow"? (Both variants are shown below.) Any help is very much appreciated.

    | MadeLoud
    0
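
    For reference, content="none" is shorthand for "noindex, nofollow", so either variant should de-index the page; a more common culprit is a robots.txt block that stops Google from re-crawling the pages and ever seeing the tag. The two variants discussed above:

    <!-- keeps the page out of the index but lets crawlers follow its links -->
    <meta name="robots" content="noindex, follow">

    <!-- equivalent to "noindex, nofollow" -->
    <meta name="robots" content="none">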

  • The objective I have is to archive an entire blog (which I no longer have time to keep up), with multiple posts over 4 years, into another blog as a folder. My question: would it be quicker and easier to do a rel canonical, or to list all the pages in .htaccess and do 301 redirects? (A sketch of the redirect option is below.)

    | charlesgrimm
    0
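
    A 301 is the cleaner signal of the two for a permanent move, and a single pattern rule can cover every post without listing pages individually. A hedged .htaccess sketch for the retiring blog's server, with hypothetical hostnames and folder name:

    # map every URL on the retired blog into a folder on the surviving blog
    RedirectMatch 301 ^/(.*)$ http://www.example.com/old-blog/$1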

  • Hi, I have a question regarding Google: for a site I am working on I cannot see the Instant Preview in my SERPs, and in Google Webmaster Tools there is no blocking robots.txt file, so I can't figure out why, when I have screenshots for all my other sites. If anyone can help, much obliged. L. This is the site: http://apexgenerators.co.uk/

    | lauratagdigital
    0

  • If you delete a page (say a sub-department/category page on an ecommerce store), should you 301 redirect its URL to the nearest equivalent page still on the site, or just delete it and forget about it? Generally, should you try to 301 redirect any old pages you're deleting if you can find a suitable page with similar content to redirect to? Won't Google consider it weird if you say a page has moved permanently to such and such an address when that page/address existed before? I presume it's fine, since, say, in the scenario of consolidating departments on your store, you want to redirect the department page you're going to delete to the existing department you are consolidating the old department's products into?

    | Dan-Lawrence
    0

  • I use Go Daddy Website Tonight. I keep getting a severe health message in Google Webmaster Tools stating that my robots.txt file is blocking an important page. When I try to get more details, the blocked file will not open. When I asked the Go Daddy peeps, they told me that it was just image and backup files that do not need to be crawled. But if Google's spiders keep thinking an important page is blocked, will this hurt my SERPs?

    | VictorVC
    0

  • Hi! Is it preferable for the "time spent downloading a page" metric in Google Webmaster Tools to be high or low? I've noticed that it rapidly decreased after I moved my site to WP Engine and I'm trying to figure out if that's a good or bad thing. Thanks! Jodi

    | JodiFTM
    0

  • Hi, I have just submitted a sitemap for one website, but I am getting this warning:
    Number of children in this Sitemap index: 3
    Sitemap contains URLs which are blocked by robots.txt.
    Sitemap: www.zemtube.com/videoscategory-sitemap.xml
    Value: http://www.zemtube.com/videoscategory/exclusive/
    Value: http://www.zemtube.com/videoscategory/featured/
    Value: http://www.zemtube.com/videoscategory/other/
    It is a WordPress website and the robots.txt file is:
    # Exclude Files From All Robots:
    User-agent: *
    Disallow: /wp-admin/
    Disallow: /wp-includes/
    Disallow: /tag/
    # End robots.txt file
    I have also tried adding this to the robots.txt:
    Sitemap: http://www.zemtube.com/sitemap_index.xml

    | knockmyheart
    0

  • I have been working on EMDs, more than 40 of them, all two-keyword EMDs with DA around 35. They all have the same template. Can that be a problem in the future? (Though they don't have similar content.)

    | Personnel_Concept
    0

  • So I have this domain, (1) devicelock.com, and I also had this other domain, (2) ntutility.com. The second domain is an old domain and is not in use anymore, but when I search for DeviceLock on Google, the homepage devicelock.com does not show up; only ntutility.com comes up. I asked one of the developers how the redirect from the old domain to the new one is happening, and he told me it's through a DNS forward, and there is no way to use an .htaccess file to set up a 301 instead. Please help!

    | Devicelock
    0

  • Hi. I'm working on a WordPress site which has some old deleted pages indexed that now show a 404 (also in the results). As these old pages used to have content and probably also some links pointing to them, what would be best practice here? Should I make a 301 redirect, or noindex the 404s?

    | Mickelp
    0

  • I came across some code for tracking PDF files like http://www.example.com/files/map.pdf. 1. map.pdf is the name of the PDF file and "files" is the folder name, am I right? 2. What shall I be able to track using that code: the number of clicks on the link, or how many people downloaded the PDF file? 3. Where in Google will this report be visible? (A typical snippet is sketched below.) Thanks a lot.

    | seoug_2005
    0
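
    PDF files cannot run page-view JavaScript themselves, so tracking code of this era typically attached a classic async Google Analytics event to the link's click, along these lines (the "Downloads"/"PDF" strings are arbitrary example labels, not required values):

    <a href="http://www.example.com/files/map.pdf"
       onclick="_gaq.push(['_trackEvent', 'Downloads', 'PDF', 'map.pdf']);">
      Download the map
    </a>

    This records clicks on the link rather than completed downloads, and the report appears in Google Analytics under Content > Events, not anywhere in Google Webmaster Tools.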

  • I was looking at Google Webmaster Tools earlier today and noticed that links to my site are shown as coming from my own IP address. Really weird. Any clue on how to fix that?

    | EricMoore
    0

  • I've been running a crawl of one of our new site builds for a couple of weeks. The Diagnostics picked up a couple of issues, which was great, but it's saying we're missing Page Titles and Descriptions on pages that do have Page Titles and Descriptions. Has anyone come across this before?

    | niamhomahony
    0

  • Hi, Google Webmaster tools sent me a few messages recently about the jump in the number of 'not found' errors. From 0 to 290 errors, ouch. I know what it's from but I think Google is seeing things.  We developed another page/subdomain we're working on with links back to the root domain. Basically a complete list of articles page that lists each article and links back to the root domain. Not sure what Google is crawling but the links that would result in a 'not found' error aren't there. Will these disappear over time? Thanks for the help!

    | astahl11
    0

  • A few months ago, my site was shut down by BlueHost because of performance issues, so I moved it to WP Engine, and cleaned up most of the plug-ins.  Since then, my search engine traffic has decreased over 50%.  Does switching web hosts hurt SEO? Thanks!

    | JodiFTM
    0

  • I am getting duplicate content warnings on the SEOMOZ crawl. I don't know where the content is duplicated. Is there a site that will find duplicate content?

    | JML1179
    0

  • Adding author markup to the homepage or to SEO-optimised sales landing pages is possible. However, it doesn't really seem to be using the feature in the spirit of its purpose. It makes sense for blog posts. It's possible for other pages and will likely improve CTRs from SERPs. But is it against the spirit of its purpose? (Example markup below.)

    | designquotes
    0
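
    For reference, the authorship markup under discussion was a two-way link: a rel="author" reference from the page to a Google+ profile, plus a "Contributor to" link back from that profile to the site. A minimal sketch with a placeholder profile ID:

    <!-- in the page's <head>, pointing at the author's Google+ profile -->
    <link rel="author" href="https://plus.google.com/111111111111111111111"/>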

  • Hey there! I am writing up an SEO plan for our company and wanted to get the group's input on the use of some SEO terms, as I need to organize and explain these efforts to non-SEO people. I usually talk about SEO in terms of "Internal" vs "External" efforts. Internal SEO efforts are things like title tags, description tags, page speed, minimizing errors, proper 301 redirects, content development for the site, internal linking and anchor text, etc. External SEO efforts are things like link building, social media profile setups and posts (Facebook, Twitter, Pinterest, YouTube), and PR work. How do you split these out? What terms do you use? Do you subdivide these tasks? For example, within Internal I sometimes talk about "Technical SEO", which has to do with making sure that site speed is good, 301s are set up correctly, noindex tags etc. are all used properly. These differ from "On Page" efforts to use keywords properly, etc. I will also use the term "Site Visibility" with non-SEOs to explain the technical impact. For example, if your site has the wrong robots.txt, if you have 500 errors everywhere and a slow site, or if you are sending spiders down a daisy chain of 301s, it is difficult for the key parts of your site to be found, so your "Visibility" to the engines is poor. You have to get your visibility up before you begin to worry about whether you have the right keywords on a page, etc. Any input or references would be appreciated.

    | CleverPhD
    0

  • I'm looking for some guidance/expert opinions on using a proxy server with WordPress. When a consumer goes to ourwebsite.com/blog, our IT department would like the request to be proxied to the WordPress blog site. They would like to add a header to the web request to identify that traffic as coming through the proper URL. Should someone or a crawler attempt to access the WordPress site directly (blog.ourwebsite.com), they would be client-side redirected to the proper URL, ourwebsite.com/blog. This is WAY out of my league, so I figured I would ask the experts: will this negatively affect our SEO? (A sketch is below.)

    | SavikaTilakhdin
    0
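
    A hedged sketch of the setup described, for an Apache front end with mod_proxy and mod_headers enabled (the header name is invented for illustration):

    # serve the WordPress blog under ourwebsite.com/blog
    ProxyPass        /blog/ http://blog.ourwebsite.com/
    ProxyPassReverse /blog/ http://blog.ourwebsite.com/
    # mark proxied requests so the blog host can tell them apart from direct hits
    RequestHeader set X-Proxied-From "ourwebsite.com"

    If the blog host then redirects any request lacking the marker header back to ourwebsite.com/blog (a 301 is a safer choice than a client-side redirect), search engines only ever see one canonical location, so the proxying itself should not hurt SEO.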

  • Good morning Mozzers, I have a question regarding a new linking strategy I'm trying to implement at my organization. We publish 'digital news magazines' that often have in-text links pointing to external sites. Recently the editorial department and I (SEO) conferred on some ways to reduce our bounce rate and increase time on page. One of the suggestions I offered was to add the target="_blank" attribute to all the links, so that site visitors don't have to leave the site in order to view the link. It has, however, come to my attention that this can have some negative effects on my SEO program, most notably fake or inaccurate time on page. Is this an advisable way to create in-text links? Are there any other negative effects I can expect from implementing such a strategy?

    | NiallSmith
    0

  • We have a product that, when started, was under the domain appnowgo.com. We've since changed the name, and the domain is now knackhq.com. The latter domain doesn't rank nearly as well as the former for many of the keywords we are targeting, for example "online database builder" and "web app builder". Obviously having "app" in the domain is not a bad thing, but it is our old name. The question is: should we 301 the appnowgo.com domain to knackhq.com? Or should we use that better rank and just link users to knackhq.com from the appnowgo.com site until we can increase the ranking of knackhq.com? We don't plan to update the content on appnowgo.com anymore, and we obviously don't want to drop in rank if at all possible. Thanks! Eric

    | sitestrux
    0

  • Need help urgently. This is the situation, as it works now:
    1. We have a global landing page: when a user types in www.mysite.com, it takes the user to www.mysite.com/global/en.html.
    2. Users on this landing page can select a country of their choice and get redirected, say, to www.mysite.com/us/en.html.
    We would like to change the functionality as follows:
    1. When a user types in www.mysite.com:
    1a. we would find the location of the request based on GeoIP, and if the request is coming from the North America region, redirect the user to www.mysite.com/us/en.html;
    1b. if the request is from any other location/region, it would continue to work as it currently does and take the user to the global landing page, www.mysite.com/global/en.html.
    Would this change have any negative impact from an SEO perspective, or go unnoticed by search engines? If it does, what are the impacts, and if it does not, why not? And if it does, what is the best possible way to address this requirement? (A sketch is below.) Appreciate your help. Thanks, Koushik Roy

    | KoushikRoy
    0
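
    A minimal .htaccess sketch of the proposed behaviour, assuming Apache with mod_geoip installed (it exposes the GEOIP_CONTINENT_CODE variable). One caveat worth knowing: Googlebot crawls mostly from US IP addresses, so under this scheme it may only ever see the /us/en.html version of the home page.

    RewriteEngine On
    # North American visitors land on the US page
    RewriteCond %{ENV:GEOIP_CONTINENT_CODE} ^NA$
    RewriteRule ^$ /us/en.html [R=302,L]
    # everyone else keeps the existing global landing page
    RewriteRule ^$ /global/en.html [R=302,L]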

  • Hello all, I'm having an issue where my UserVoice account is creating duplicate page content (image attached). Any ideas on how to resolve the problem? One solution we're looking into is moving the UserVoice content inside the app so it won't get crawled, but that's all we have for now. Thank you very much for your time; any insight would be helpful. Sincerely,
    Jon Birdsong, SalesLoft

    | JonnyBird1
    0

  • I recently joined a team whose site has a) no great content and b) not much of any search traffic. I looked, and all their URLs are built this way: a normal-looking link is not actually a new page but a # fragment, like /#content-title. The page doesn't refresh, and it has no h1 tag. My initial thought is to gut the site and rebuild it in WordPress, but first I have to ask: is there a way to make a site that loads content via /#/ URLs friendly to search engines? (One era-appropriate option is sketched below.)

    | andrewhyde
    0
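
    Short of rebuilding, the option Google offered at the time was its AJAX crawling scheme: change /#content-title to /#!content-title, and Googlebot will then request the page as ?_escaped_fragment_=content-title, which the server can answer with a static HTML snapshot of the rendered page. A hedged .htaccess sketch, with /snapshots/ as a hypothetical folder of pre-rendered pages:

    RewriteEngine On
    # serve a pre-rendered snapshot when a crawler asks for an escaped fragment
    RewriteCond %{QUERY_STRING} ^_escaped_fragment_=(.+)$
    RewriteRule ^$ /snapshots/%1.html [L]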

  • I am cleaning up a client's link profile and am coming across a lot of directories (no surprise). My question is: if an obvious free-for-all generic directory doesn't look to have been hit by any updates, is it a wise move to recommend it for removal on the basis that it is a free-for-all directory and could be hit in the future?

    | fazza47
    0

  • Hi, I understand that whitespace in source code is absolutely fine and is stripped out. For example, the following code is fine: <h1>Red Apples</h1> some text. However, how is whitespace interpreted INSIDE HTML tags such as H1s? My dev team have instructions to add H1s to a page; however, they have done so with the text padded by whitespace inside the tag (37 characters long) rather than the bare <h1>Red Apples</h1> (10 characters long). Do you think this extra space will be harmful? The browser renders it fine, but if you use something like the MozBar plugin it shows the H1 length as 37 characters. I know the 10-character H1 is 100% relevant to the search term "Red Apples", but is the 37-character H1 only 27% relevant (10/37)? I've made the request to the dev team to remove this whitespace because I'd rather err on the side of caution, but it's been knocked back: the HTML spec specifies that consecutive whitespace should be interpreted as a single space, all browsers build the DOM by trimming a tag's value, and they imagine search bots do the same, so they don't want to mess with the compiler. Does anyone have experience of this? I've never had whitespace in an H1 before, so I don't know. I'm happy to leave the whitespace in if it's not going to be an issue. Thanks in advance

    | FashionLux
    0

  • Over the past 6 months I've noticed an increase in 404 errors being picked up by Google Webmaster Tools. When I review the errors table, the links are invariably made-up links generated from what look like scraper sites or sites with search results embedded in them. Below are some examples (broken link -> URL generating the link):
    www.mysite.com/rwickenhauser -> http://www.webstatsdomain.com
    www.mysite.com/guest -> http://www.webstatsdomain.com
    www.mysite.com/windows -> http://www.webstatsdomain.com
    www.mysite.com/..app -> http://213.174.143.40/
    www.mysite.com/ht.. -> http://de.aguea.com/
    None of these pages exist on our website. So do I need to set up redirects for these, or can they be ignored?

    | adamlcasey
    0

  • Hey there. When we go to our Webmaster Tools there is an orange triangle, and the issue is that Google's robot cannot access our site. Does anyone know why this could be? Thanks!

    | Comunicare
    0

  • I had my website ranked on page one, probably in the top 6, but around 3-4 months ago my website just disappeared for that keyword. Any suggestions? Keywords: brass rod, brass rods. Webpage URL: www.jayjalaramext.com/brass-rods/, and the website itself is www.jayjalaramext.com. Is it over-optimized, or was it optimized too aggressively, leading to a bad history? Or is this a Panda or Penguin effect? Waiting for your suggestions.

    | HirenKhambhayta
    0

  • Is there any way to control the sitelinks under a listing in Google? I have a group of lawyers where one of them is showing up in the sitelinks. They want all of the lawyers to show up. Right now it is showing one lawyer, the about page, the contact us page, etc. Thanks!!!!

    | SixTwoInteractive
    0

  • I bought a domain and it has nice traffic. It only has about 5 main pages in PHP. When I got the site I switched to HTML because PHP was overkill. I did the 301, and when I check site:domain.com Google has deleted the PHP files and replaced them with the HTML versions. It has been about 7 days. I did NOT use a 301 for each of the 5 pages to go from PHP to HTML; instead I used this code:
    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^mydomain.com
    RewriteRule (.*) http://www.mydomain.com/$1 [R=301,L]
    RedirectMatch 301 (.*)\.php$ http://www.mydomain.com$1.html
    So basically if you load the PHP it will load the HTML version: dog.php -> dog.html. Is this okay, or should it be done differently? Worried! Thanks!

    | samerk
    0

  • Buonjourno from Wetherby UK 🙂 Diagnosing duplicate content is a classic SEO skill, but I'm curious to know what techniques other people use. Personally I use Webmaster Tools, as illustrated here: http://i216.photobucket.com/albums/cc53/zymurgy_bucket/webmaster-tools-duplicate.jpg, but what other techniques are effective? Thanks,
    David

    | Nightwing
    0

  • I'm evaluating moving an established ecommerce site I own over to a WordPress-based site with the WooCommerce plugin. My question is: does the added /category/ slug hurt SEO rankings at all?

    | CobraJones95
    0

  • Last week I was browsing Google's index with "site:www.mydomain.com" and wanted to scan over what Google had indexed for my site. I came across a URL that was mistakenly indexed. It went something like this: www.mydomain.com/link1/link2/link1/link4/link3. I didn't understand why Google had indexed a page of mine like that, when the "link" pages were links on my main bar, which were site-wide links. It seemed to be looping infinitely, over and over. So I started trying to see how many of these Google had indexed, and I came across about 20 pages. I went through the process of removing the URLs in Webmaster Tools, but then I wanted to know why it was happening.
    I discovered that I had mistakenly placed some links in my site's header in this manner: <a href="link1/">link1</a> <a href="link2/">link2</a> <a href="link3/">link3</a>. If you know HTML, you will realize that by not placing a "/" at the front of the link, I was telling that page to append the link to the URL it was currently on. This created an infinite loop of links, which is not good 🙂 Basically, when Google went to www.mydomain.com/link1/ it found the other links, which told Google to add those URLs to the existing URL and crawl on, producing something like www.mydomain.com/link1/link2/... When you do not add the "/" in front of the directory you are linking to, it will do this. The "/" refers to the root, so if you place it in front of the directory you are linking to, it will always treat that first "/" as the root with the rest of the URL following it.
    So what did I do? Even though I was able to find about 20 URLs using the "site:" search method, there had to be more out there, and although I kept searching I was not able to find any more. I was not convinced, and then the light bulb went on. My .htaccess file contained many 301 redirects from my attempt to redirect those pages to a real page; they were not really relevant pages to redirect to. So how could I find out what Google had indexed, since Webmaster Tools only reports the top 1000 links? I decided to kill my .htaccess file. Knowing that Google is forgiving when major changes to your site happen, I knew it would not simply kill my site for removing the .htaccess file immediately. I waited 3 days, then BOOM! Webmaster Tools was reporting that it had found a ton of 401s on my site. I looked at the Crawl Errors and there they were: all those infinite-loop links that I knew had to be out there, I was now able to see. How many were there? Google found over 5,000 of them in the first crawl. OMG! Can you imagine the "low quality" score I was getting on those pages?
    By seeing all those links I was able to determine about 4 patterns in them, for example:
    www.mydomain.com/link1/link2/
    www.mydomain.com/link1/link3/
    www.mydomain.com/link1/link4/
    www.mydomain.com/link1/link5/
    My issue was that I wanted to keep all the URLs pointing to www.mydomain.com/link1, but anything after that I needed gone. So I went into my robots.txt file and added this:
    Disallow: /link1/link2/
    Disallow: /link1/link3/
    Disallow: /link1/link4/
    Disallow: /link1/link5/
    There were many more pages indexed that went deeper into those links, but I knew I wanted anything after the second level gone, since that was the start of the loop I had detected. With that I was able to block, from what I know, at least 5k links if not more. What did I learn from this?
    Kill your .htaccess file for a few days and see what comes back in your reports. You might learn something 🙂 After doing this I simply put my .htaccess file back, and I am on my way to removing a ton of "low quality" links I didn't even know I had.

    | cbielich
    0

  • How do I redirect 404 errors to a 404.php file? (See the sketch below.)

    | learningall
    0
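
    Assuming Apache, this is a one-line .htaccess directive. The important part is that the error page is served on the missing URL with a 404 status, not via a redirect, so search engines still see the URL as gone:

    # serve the site-root-relative 404.php for any URL that does not exist
    ErrorDocument 404 /404.php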

  • I'm having trouble wrapping my head around keyword targeting when two keywords are very similar. Here's my dilemma: let's say I sell ACE brand widgets. I'm doing well with "ACE Widgets" queries but not "ACE". How do I fix this, since "ACE" is already all throughout the results page? And supposedly anchor text is playing a less and less significant role in link relevancy, so just getting links with "ACE" as the anchor text wouldn't really help (I wouldn't think). Just a little confused. Thanks

    | SheffieldMarketing
    0

  • Hi guys, to be honest it's a little embarrassing to throw out this question, but it's one of my weakest points of knowledge at the moment. I've tried to get a grasp of canonical URLs and what it all means. From my understanding, it informs Google which page to take into consideration when there's the possibility of duplicate content. Right? However, with the site I'm working on, I'm not sure if it would be worth using site-wide, and what impact it would have. The site I'm working on: http://bit.ly/N7eew7. With the nature of the site, there could be a lot of duplicated content, as several properties listed could have a similar address due to being in the same building etc. From what I can see, no canonical URL was set up on the homepage; the other variations of the homepage URL are 301 redirecting to the http://www. version. Can someone explain it all to me in simple terms? (A minimal example follows below.) I honestly believe I'm getting more confused by the minute. Thanks, guys, for your patience 🙂

    | MarkScully
    1
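
    In simple terms, the tag is a hint placed on each duplicate page naming the one URL that should rank. A minimal sketch with a hypothetical listing URL:

    <!-- in the <head> of every variant of the same listing -->
    <link rel="canonical" href="http://www.example.com/property/123-example-street"/>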

  • Hello everyone, I have a question about how to approach this problem. I have a local business website, and I want to rank it for our main keyword, so the idea is to rank the main keyword to the home page, www.sitename.com. At the same time, we blog every week, and one of the categories is the same as the main keyword. That makes sense, because the majority of the blog posts are about it. In a certain way, the homepage and the category page are competing for the same keyword. How can I approach this? Should I use rel canonical on the category page, pointing to the homepage? Thanks for your help.

    | Barbio
    0

  • I'm having some really strange issues with duplicate page titles and I can't seem to figure out what's going on. I just got a new crawl from SEOmoz and it's showing duplicate page titles on
    http://www.example.com/blog/
    http://www.example.com/blog/page/2/
    http://www.example.com/blog/page/3/
    and so on for the later pages. I have no idea what's going on, how these were duplicated, or how to correct it. Does anyone have a chance to take a look and see if you can figure out what's happening and what I need to do to correct the errors? (Example pagination markup below.) I'm using WordPress and the All in One SEO plugin. Thanks so much!

    | KLLC
    0
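
    For paginated archives like these, the titles can be varied per page (for example by appending "Page 2"), and the markup Google recommended at the time was rel="next"/rel="prev" links so the series is treated as one sequence. A sketch for /blog/page/2/:

    <link rel="prev" href="http://www.example.com/blog/">
    <link rel="next" href="http://www.example.com/blog/page/3/">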


