Latest posts made by iPullRank
-
RE: Ajax Module Crawlability vs. WMT Fetch & Render
Hey Scott,
You're good. If you see it in the fetch and render, you're seeing it as Googlebot sees it.
Google has had the capability to crawl Ajax content for some time now. And while you don't see the content when you view source, that's not that big a problem. As long as the content is on the page at load, Google should not have a problem indexing it. So after load, look at the page with Inspect Element; that's how Google will see the content.
Where there may be crawlability issues is in content that requires user action to display. Google is inconsistent with how many actions they will attempt in order to index content.
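To make that distinction concrete, here's a minimal, hypothetical sketch (not from your actual module): the first piece of content is injected as soon as the page loads, so it shows up in the rendered DOM that Fetch and Render displays; the second only appears after a click, which Google may never trigger.
<div id="reviews"></div>
<button id="more">Show more</button>
<script>
  // Injected at load: present in the rendered DOM, so Googlebot should see it.
  document.getElementById('reviews').innerHTML = '<p>Content loaded via Ajax at page load.</p>';
  // Injected only on user action: Google may never perform this click,
  // so this content is at risk of not being indexed.
  document.getElementById('more').addEventListener('click', function () {
    document.getElementById('reviews').innerHTML += '<p>Extra content revealed on click.</p>';
  });
</script>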
-Mike
posted in Intermediate & Advanced SEO
-
RE: Ecommerce site product reviews, canonicals – which option to choose?
Hey Cyndee,
Your issue has to do with how this is coded. Let me explain.
Here's what your paginated numbers at the bottom look like in the code:
<a title="2" data-bvcfg="3520493" name="BV_TrackingTag_Review_Display_PageNumber_2" data-bvjsref="http://improvements.ugc.bazaarvoice.com/0048-en_us/414441/reviews.djs?format=embeddedhtml&page=2&scrollToTop=true" <strong="">href="javascript://">2</a>
Notice that the "href" parameter of the anchor tag has no direct URL and because of that Google doesn't crawl to the next page in the series because there's no actual link. What would be ideal is if you had the actual URL to the second page so that it is accessible to Google as the href tag. Granted, Google will likely come back to these pages with the more feature-rich crawler and be able to access the content, but that could potentially take a long time or in fact never happen. I believe this is a function of how BazaarVoice operates, although I haven't had enough experience with it to know. A view-all page would help you get around the problem, but again, I'm not sure how that works with regard to BazaarVoice.
You can also use rel="prev" and rel="next" to connect the pages, but that markup often has spotty results.
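For illustration, a crawlable version of that pagination might look something like this (the URL pattern is hypothetical, since I don't know how your review pages would actually be addressed):
<!-- Real URL in the href, so Googlebot can follow it without executing JavaScript -->
<a title="2" href="http://www.example.com/product-name/reviews?page=2">2</a>
<!-- And in the head of page 2, the optional rel prev/next hints -->
<link rel="prev" href="http://www.example.com/product-name/reviews?page=1">
<link rel="next" href="http://www.example.com/product-name/reviews?page=3">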
-Mike
posted in Technical SEO
-
RE: Mobile Googlebot vs Desktop Googlebot - GWT reports - Crawl errors
Hey Carla,
I'm not entirely sure what you're saying with:
"one of the crawl reports is showing tons of 404's for the "smartphone" bot and with very recent crawl dates. If our website is responsive, and we do not have a mobile version of the website I do not understand why the desktop report version has tons of 404's and yet the smartphone does not. I think I am not understanding something conceptually."
First you say the smartphone bot is seeing tons of 404s, but then you say the desktop report shows tons of 404s and the smartphone report does not. If you can clarify that, I can probably give you a better answer.
However, the answer is likely that Google may decide not to crawl URLs that it has already identified as 404s in one context. That is to say, if Google identifies URLs as 404s with the smartphone crawler, it knows not to crawl them when it encounters them with the desktop crawler, and vice versa.
-Mike
posted in Intermediate & Advanced SEO
-
RE: On Page Grader Problem-Sorry But This Page Inaccessible
Hey Alan,
Not sure if your issue was related to the DDoS attack or not, but I was able to replicate it now. Make sure that there is no spacing before the URL when you paste it into the on-page grader.
To track keyword rankings over time you'll have to set up a Campaign, which it sounds like you have. Once you've done that you can find the rankings in the Search section of the dashboard. This is also where you can filter by the label you've tagged your keywords with.
Alternatively you can set up keyword rankings in the rank tracker https://moz.rankious.com/_moz/researchtools/rank-tracker but you won't get the tagging or email report functionality.
-Mike
posted in API
-
RE: Google Processing but Not Indexing XML Sitemap
Hey David,
Quick question: It's been over a month. Are you still having this issue?
From what you've said, it sounds like the site verification failure is likely what caused this. However, it is worth noting that even if Google ingests a sitemap, that doesn't necessarily mean they will index the pages. I would suggest you review your site log files to see if Google is indeed crawling those new URLs. If they are, Google is likely discarding them because your crawl allocation is being used up by more important pages and these new pages haven't earned any link or social activity yet.
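If it helps, here's a rough sketch of that log check in Python. It assumes a combined-format Apache/Nginx access log and a made-up list of new URLs, so adjust the file path and URL list to your own setup (and note this only matches the Googlebot user agent; a reverse DNS lookup would confirm the requests really come from Google).
import re
from collections import Counter

LOG_FILE = "access.log"                      # hypothetical path to your raw access log
NEW_URLS = {"/new-page-1/", "/new-page-2/"}  # the URLs from your new sitemap

hits = Counter()
with open(LOG_FILE) as f:
    for line in f:
        # Only count requests that identify themselves as Googlebot.
        if "Googlebot" not in line:
            continue
        match = re.search(r'"(?:GET|HEAD) (\S+)', line)
        if match and match.group(1) in NEW_URLS:
            hits[match.group(1)] += 1

for url in sorted(NEW_URLS):
    print(url, hits[url], "Googlebot requests")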
-Mike
posted in Intermediate & Advanced SEO
-
RE: Optimising YouTube videos
Hey Hardley,
I concur with Kevin's point below. Giving all of your traffic to YouTube isn't the best idea. Phil Nottingham's research has shown that an extremely low number of users click through to links in YouTube descriptions. Naturally, YouTube is the 2nd biggest search engine on the web and people will find you within it, but you should consider self-hosting your videos so the traffic lands on your website. Wistia is a great solution for that.
All that said, YouTube pages pretty much follow the same rules as any other page on the web with regard to Google: on-page optimization with keyword-relevant copy, titles, and metadata, and off-page optimization in the form of links. Do those things as you normally would and your videos will rank in Google. The on-page factors and engagement metrics drive a video's ranking within YouTube itself. YouTube also has its own keyword research tool, so you can identify the terms that need to be placed in your tags, descriptions, and titles.
-Mike
posted in Content Development
-
RE: Keyword every blog post?
Hey Chris,
I agree with William. You should always take a peek at the relevant keyword research tools to see what vocabulary people are using to discuss your topic, but don't worry too much about having a post title that mirrors those phrases exactly. Even if the phrase is long tail, Google's keyword tool is just an estimation so there may be more people searching for a given term than you think. Check out UberSuggest and also look at keyword trends in Social Media using Bottlenose's Sonar tool. Those may give you more of an indication of what keywords people are using.
Definitely don't go haywire shoehorning keywords into titles and headings, but do be sure you know which keywords a given post owns, so that you can go back through your site and link to those new posts from old posts using the relevant keywords.
-Mike
posted in Keyword Research
-
RE: Effects of pages heavily reliant on CSS for text and image content
Hey Arthur,
Your CSS is not the issue here. None of your content is hidden or "cloaked." You can see this when you disable CSS and JavaScript with the Web Developer plugin to get a sense of what the text crawler sees. The issue is that the content is indeed thin, and you're not offering anything beyond what can be found in the product specification sheet on the manufacturer's website.
You should absolutely consider some more text-based content. You might also consider user-generated content, by allowing people to comment on the differences between the two products and which they like better. Responses would be inherently keyword-rich, and the approach allows you to scale it across numerous pages.
That said, the UX is very nice. Best of luck and let us know if you might be interested in sharing your progress.
-Mike
posted in Intermediate & Advanced SEO
-
RE: Analytics not tracking traffic from Old Domain Redirect
Is this from all channels? Or are you just specifically looking for the increase in referral traffic? I need a little more information to determine what's going on here. Feel free to direct message me some more details if you're not comfortable sharing them publicly.
-Mike
posted in Reporting & Analytics
-
RE: Transfer Blog From External Domain
Hey Tom,
Can you confirm that there has been no penalty on your blog domain? If there has been, you should be careful, because penalties are known to hop 301s, and JohnMu recently revealed that penalties will follow content even without a 301.
That said, I would prioritize what you redirect based on what has links and shares. There's no reason to set up redirects for pages that no one is visiting or linking to. Go with your first option, but 410 the rest to get them out of the index faster.
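As a rough illustration (the paths and domain are made up), the .htaccess on the old blog domain could look something like this, assuming Apache with mod_alias enabled:
# Posts with links and shares get a 301 to their new home.
Redirect 301 /popular-post/ http://www.yournewdomain.com/blog/popular-post/
# Everything else returns 410 Gone so it drops out of the index faster.
Redirect gone /forgotten-post/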
-Mike
posted in On-Page Optimization
Best posts made by iPullRank
-
RE: Preserving Social Shares Through URL Changes
Hey Jared,
As you might imagine I have done this. In fact we did it with the iAcquire.com redesign when I was still there. Basically we built something custom to grab the social counts and display them in our custom social share buttons. Our Creative Director wanted to make custom share buttons because they are otherwise different sizes and it looks silly. Then we just set the old URL in one place in the CMS and it pinged SharedCount.com's API to get all the different share counts and display them.
A quick update to my post: from what I understand, social shares are hopping 301s for Google +1s only right now.
As far as your issue with the new shares being attributed to old URLs, that's something I've thought about as well. You could potentially build a solution similar to the one we had, but allow the new social shares to go with the new URL and compare the share counts of the new and old URLs whenever the page loads. Once the new URL surpasses the old one's share count, just skip grabbing the old URL's share counts altogether and use the new URL for both displaying share counts and capturing new shares.
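As a very rough sketch of that switch-over logic in Python (the SharedCount endpoint and response shape here are assumptions from memory, and newer versions of the API require an API key, so check their current docs before relying on this):
import requests

SHAREDCOUNT_API = "http://api.sharedcount.com/"  # assumed endpoint; verify against current docs

def total_shares(url):
    # Assumes a JSON response whose values are either plain counts
    # or dicts with a "total_count" field (e.g. Facebook's breakdown).
    data = requests.get(SHAREDCOUNT_API, params={"url": url}).json()
    total = 0
    for value in data.values():
        if isinstance(value, dict):
            total += value.get("total_count", 0)
        elif isinstance(value, int):
            total += value
    return total

def shares_to_display(old_url, new_url):
    new_total = total_shares(new_url)
    old_total = total_shares(old_url)
    # Once the new URL overtakes the old one, skip the old URL altogether.
    return new_total if new_total > old_total else old_total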
Let me know if that helps. I'm sure we can perfect a solution for you, but it will involve getting a little creative.
-Mike
posted in Technical SEO
-
RE: When should you redirect a domain completely?
You should fill out a Change of Address in Webmaster Tools as well, and then confirm that all of the old URLs are out of the index. The redirects, however, should remain in place as long as there are links pointing at the old site's URLs.
posted in Intermediate & Advanced SEO
-
RE: Site Spider/ Crawler/ Scraper Software
Hey Alex,
Screaming Frog is hands down the best desktop crawling software and it has most of what you are looking for.
-Mike
posted in International SEO
-
RE: Seek help correcting large number of 404 errors generated, 95% traffic halt
I agree with Rand's suggestions. I just ran a Screaming Frog crawl of the whole site on 10,233 links, 8997 URLs and got no 404s. So I think it's pretty safe to assume you've fixed the 404 issue. Here's the output of the crawl in case you'd like it for a reference: http://www.sendspace.com/file/7zui0v
I'd say:
- Definitely clean up and resubmit your XML sitemap
- Double check your backlink profile with Open Site Explorer and MajesticSEO to be sure that there aren't sites linking to URLs that no longer exist. If you find any of these, make sure to 301 redirect them. Just take all the target URLs and dump them into Screaming Frog in list mode. All the links from OSE point to your homepage, so they are not an issue; I don't have access to Majestic right now, so I couldn't run those for you.
- You can now submit pages in Google Webmaster Tools as well, in the Fetch as Googlebot section. So you may consider submitting some of the new pages the site generates, in addition to your reconsideration request, to help Google re-crawl and see that the 404s are gone.
Good luck man and please let us know if nothing changes after you implement these fixes.
-Mike
posted in Technical SEO
-
RE: Are my new resource sites worthless?
Hey David,
It's hard to say, as there has been no clear information from Google on how they use registration data. They are, however, a registrar, and most SEOs will tell you the age of a domain does factor into trust. In the past, people have seen links devalued once a WHOIS record went from private to a person's name. In other words, what you're suggesting is very possible, but it's hard to say that just because you were named the owner once, the domains will no longer be worthwhile.
The real issue here is that what you own is essentially a link/content farm and therefore these sites may have been devalued due to Panda. Ultimately link and content farms are not a sustainable solution and you should think more about content marketing and attracting links via the awesome stuff you would be creating.
I would suggest consolidating into a smaller number of sites, 301 redirecting the link equity and then launching content strategies that align with your business goals.
-Mike
posted in Link Building
-
RE: How to handle 2 domains pointing to the same server.
Hey Darren,
I'm not quite clear on what you mean by the same server. If you're saying both domains point to the same content in the same folder in a virtual hosting environment and it's a question of duplicate content, I would definitely suggest just 301 redirecting the secondary domain at the .htaccess level and calling it a day.
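For that first scenario, here's a sketch of what that rule typically looks like with Apache's mod_rewrite (the domain names are placeholders):
RewriteEngine On
# Send every request for the duplicate domain to the same path on the primary domain.
RewriteCond %{HTTP_HOST} ^(www\.)?duplicate-domain\.com$ [NC]
RewriteRule ^(.*)$ http://www.primary-domain.com/$1 [R=301,L]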
If what you're saying is that the two domains share the same hosting environment but are in separate directories with separate content, and you're worried because they are on the same IP address, then yes, the problem is slightly more complicated. I would suggest moving the second domain to a different C-block and then 301 redirecting it. Another thing to think about here is...are these domains registered privately or are they registered to the same person? Many SEOs believe this tends to affect rankings.
As far as monitoring in SEOmoz Pro, I would set them up as competitors to compare linking metrics, but for crawling, give each domain its own campaign.
-Mike
posted in Moz Pro
-
RE: Rankings Tanked After Redesign
Hey Josh,
I'm leery of the "301 redirect add-on" and that would be the first thing that I double-check. I'd pull all of the links that you guys have run all the link targets through a web crawler such as Screaming Frog to make sure that they aren't your 404s. My main concern is that your link equity has not been maintained.
I'd also look at the quality of the links as was mentioned later in this thread, then I'd start looking at whether your content is what we'd consider thin.
There are a lot of moving parts here, but without more information on the site I can't give you a more definitive answer.
posted in Web Design
-
RE: Value of Domain Names
Hey Shara,
If you mean you want to purchase those domain names and 301 redirect them, there will only be value in that if they have already accrued link equity. Otherwise, if there's nothing there and it is a new domain name, it will never even be indexed if you are redirecting it.
If you mean you want to buy a domain name and build a doorway page to a specific page on your site this has been expressly frowned upon by Google for years.
Either way, it's not a worthwhile use of resources or time unless the domain already has link equity that can be transferred to your site. And you would also have to have legitimate pages on the site to make it worth the effort.
-Mike
posted in On-Page Optimization
I code, I manage, I speak, I travel, I optimize, I rap.