On Tuesday Wil Reynolds published a post on SEOmoz entitled “How Google is Making Liars Out of the Good Guys in SEO”. I enjoyed the read and Wil’s passion behind it (it’s one of the first posts in a while that I’ve read in depth), and I don’t disagree with the principle behind his post, but I think the examples don’t match up and the post doesn’t convey what he set out to convey (sorry, Wil). In fact, I disagree that Google is screwing us over, and I don’t think their wording is telling us what Wil’s post says it is. If anything, they are lying by omission by not talking about outreach, but we all know that outreach is necessary for content.
Instead, I think Google is dealing with a broken algorithm and addressing it in a different way. I do agree, though, that what Google says works isn’t always what works best, but they do not deny that anchor text works. In fact, they know spam is a problem; why else would they have a spam report? However:
I personally think Google is moving in the right direction and actually dealing with this issue in a way that will fix the problem, rather than requiring them to keep patching a broken algorithm that will always have holes for people to exploit.
I’m going to break this post into a few sections. First, talking about “good SEO”, because I find the discussion worthwhile. Then, let’s talk about deserving to rank, if you decide that ranking is necessary for you. Finally, I’ll show some examples that will hopefully give us some hope that Google is indeed fighting, and starting to win, this battle. Yes, we’re going to talk about Search Plus Your World (SPYW), which I am increasingly liking.
In case you were wondering, I have deliberately chosen to not respond to Wil’s post point by point. I am trying to get at the heart of the issue of what he was talking about, that Google is not doing anything to help out the good guys. I’m no Google fanboy, but I disagree.
As everyone knows, and a lot of people have already written about, Google recently launched “Search, Plus Your World”. Danny wrote a great piece over at SearchEngineLand and Jon Henshaw from Raven wrote his take on the matter. Both of those are worth reading, as Danny’s gives a lot of insider knowledge, and Jon’s is a great opinion that I think is pretty accurate.
I want to point out an intricacy I have seen that is already bothering me about Search+, as I am going to call it from here on out. I guess someday it will become “Search” once again, but not anytime soon.
One of the factors involved in the Panda update, we think, is the ratio of content to ads on a page. Many people have complained that Google encouraged people to put as many ads as possible on a page only to then slap them hard in the Panda update.
What would you say if I showed you that Google is doing the same thing? I hate when corporations take advantage of situations because they wrote the book on the topic.
I used a tool called MeasureIt to draw the measurement lines to see how wide, in pixels, the text areas are. Surprisingly, Google’s is only about 100px wider than eHow’s.
Here is what I see in Google’s SERPs these days:
Click to Enlarge. 555px wide.
Here is a screenshot of eHow:
Click to Enlarge. Only 463px wide!
How is Google’s search results page any different from eHow’s page, with a lot of ads and a thin line of content?
My take: It’s not. If Google were to be ranking their own search results and rolled out a Panda penalty, they would lose almost all of their search volume. I guarantee it. Who cares about readability and usability? The funny thing is, Google knows that they have positioned themselves so well as the dominant search engine that people will just continue on using them, no matter what.
I admitted back in the day that I don’t really notice PPC ads, but the other day I did while clicking through some SERPs, because I saw 11 ads on one page.
Check out this “Online Colleges” SERP (full length screenshot, so you’ll need to click to enlarge):
Plus, it’s not like Google is hurting for money. After all, in the numbers they released yesterday, they made $9.72 BILLION. Yes, with a “B”.
Can we make Google change the way they are doing business? No, I don’t think so. I’m sure they’re laughing all the way to the bank (9.72 BILLION ways) and you PPCers are super happy about it as well, because now your job is making your company more money. But us organic folks? Now we have even less to work with. And companies are spending more and more money on paid search.
Duane Forrester from Bing and Matt Cutts from Google, who are SEOs’ main window into the search engines, have given us seemingly conflicting canonical advice. Or have they?
Friday morning Duane Forrester from Bing published a blog article entitled Managing Redirects – 301s, 302, and Canonicals. Within this article, Duane explains how Bing sees and may treat different redirects. The part about 301s and 302s is informative and interesting, but the most interesting part of the post comes when he talks about canonicals and says:
Something else you need to keep in mind when using the rel=canonical is that it was never intended to appear across large numbers of pages. We’re already seeing a lot of implementations where the command is being used incorrectly. To be clear, using the rel=canonical doesn’t really hurt you. But, it doesn’t help us trust the signal when you use it incorrectly across thousands of pages, yet correctly across a few others on your website.
A lot of websites have rel=canonicals in place as placeholders within their page code. It’s best to leave them blank rather than point them at themselves. Pointing a rel=canonical at the page it is installed in essentially tells us “this page is a copy of itself. Please pass any value from itself to itself.” No need for that.
We do understand that doing work at scale requires some compromises, as it’s not easy to implement anything on a large site page by page. In such cases, leave the rel=canonical blank until needed.
In a Google Webmaster Help video, Matt Cutts addressed self-referencing canonicals:

“We built in support to make sure that that doesn’t cause any sort of problem. So I can’t speak for other search engines, but it’s definitely a very common case. Imagine if you had to check every single URL and then do a self check to see whether you were on that URL. If you were, then you couldn’t have a rel=canonical tag. That would be a lot of work to generate all those tags. So for our part, we said, you know what? Go ahead and you can put a rel=canonical on every single page on your site if you want to. And then if it points back to itself, that’s no problem at all. We handle that just fine.”
At first the Bing post seems to contradict the Webmaster video, but does it? I actually do not think that the two are offering different advice, but rather that Google is telling us that they have a more sophisticated system for it. Google seems to be making our lives easier as webmasters and SEOs since we do not have to worry if Google treats self-referencing canonicals correctly. Are they necessary/mandatory? No, as Forrester correctly says. Are they a good idea to have? Yes, as Cutts says.
When should I definitely use the canonical tag?
Here are the cases where I think that you should use the canonical tag:
If you use tracking parameters, or may at any point in the future; or
If you enable people to permalink to comments in your blog posts; or
If you might ever link to your site using the non-www version; or
If others may ever link to your site using your non-preferred www version.
The above scenarios are pretty much all-encompassing, so I say to use the canonical tag on every page! Duane does not go so far as to say that using it will hurt your Bing results, just that “it’s not necessary.” Google says they are smart enough to know when it is self-referencing. So why worry about whether someone is going to link to your site using the non-www when you use the www? Use the tag.
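All of those scenarios come down to one thing: emitting a canonical that points at the clean, preferred version of whatever URL was requested. Here is a minimal sketch of how a site might generate that tag (the hostname and the list of tracking parameters are hypothetical; adjust them for your own site):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Hypothetical tracking parameters to strip; extend for your own campaigns.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "utm_term", "utm_content"}
PREFERRED_HOST = "www.example.com"  # assumed preferred (www) hostname

def canonical_tag(requested_url: str) -> str:
    """Build a self-referencing canonical <link> for the requested URL."""
    parts = urlsplit(requested_url)
    # Normalize non-www requests/links to the preferred www host.
    host = PREFERRED_HOST if parts.netloc == "example.com" else parts.netloc
    # Drop tracking parameters; urlunsplit with an empty fragment also
    # drops comment permalinks like #comment-42.
    query = urlencode(
        [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING_PARAMS]
    )
    clean = urlunsplit((parts.scheme, host, parts.path, query, ""))
    return '<link rel="canonical" href="%s">' % clean

print(canonical_tag("http://example.com/post/?utm_source=feed#comment-42"))
# <link rel="canonical" href="http://www.example.com/post/">
```

Because the tag is computed from the requested URL, the same template code covers the self-referencing case and the messy-URL cases alike, which is exactly why putting it on every page is cheap.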
When should I definitely NOT use the canonical tag?
There are a few cases where you should not use the canonical tag, and should instead use a different tactic:
When the page is no longer necessary. Use a 301 redirect to a relevant page instead.
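That 301 swap takes only a few lines at the server level. Here is a minimal WSGI sketch (the paths and the retired-page mapping are hypothetical):

```python
# Hypothetical mapping of retired pages to their replacements.
RETIRED = {
    "/old-widget-review/": "/widget-reviews/",
    "/2009-holiday-guide/": "/holiday-guide/",
}

def app(environ, start_response):
    """Serve live pages normally; 301 retired pages to a relevant URL."""
    path = environ.get("PATH_INFO", "/")
    if path in RETIRED:
        # Permanent redirect: tells engines to pass value to the new URL.
        start_response("301 Moved Permanently", [("Location", RETIRED[path])])
        return [b""]
    start_response("200 OK", [("Content-Type", "text/html")])
    return [b"<html>...live page...</html>"]
```

The point of the 301 (rather than a canonical) here is that the old page stops being served at all, so there is nothing left for a canonical tag to live on.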
What’s the takeaway?
Bing’s advice in the article is unnecessarily confusing, especially for novice SEOs and webmasters. I think the SEO best practice of using canonical tags, even self-referential ones, on every page should still be followed. In this case, it’s better to be safe than sorry. If the search engines tell us that they are smart enough to figure it out, let’s go with that. It’ll save us time in the long run.
Google recently released and announced their “people widget” for Gmail. You can read about it here and here.
*Notice* The two articles are identical, but Google is not using a cross-domain canonical.
Google is creating duplicate content themselves by posting the same article on two different URLs!
Google has released the same blog post on two separate domains. One is http://gmail.blogspot.com, and the other is http://googleenterprise.blogspot.com.
Here are two screenshots, comparing the two. Can you find a difference?
Here are the results when you search “introducing the people widget”:
This would not be an issue at all if Google followed their own advice and used a cross-domain rel=canonical tag to show which one they want to rank for the search. However, this is what the canonicals say:
The only difference I see is in the Title tags of the pages. This is the only area that may lead one to say that this is not TRULY duplicate content. But is this the case? Or is it actually worse, since the creator is trying to make it seem as if the two are completely unique articles? (And yes, I know that the two reference one another, but is that enough?)
To this I say, “Bad Google, very bad.” Why would you not use your own advice (Google Webmaster Central blog)?
Google even says in that post:
Q: Do the pages have to be identical?
A: No, but they should be similar. Slight differences are fine.
I see slight differences. Why is there not a cross-domain rel=canonical? This instance seems to be allowing duplicate content so that Google has a better chance of ranking for the search query.
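You can check a case like this yourself by pulling the rel=canonical out of each page and comparing them. A quick sketch with Python’s standard-library HTML parser (the page snippets and URLs below are illustrative, not the actual blog markup):

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collect the href of every <link rel="canonical"> in a page."""
    def __init__(self):
        super().__init__()
        self.canonicals = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and (a.get("rel") or "").lower() == "canonical":
            self.canonicals.append(a.get("href"))

def find_canonicals(html: str):
    finder = CanonicalFinder()
    finder.feed(html)
    return finder.canonicals

# Two hypothetical pages publishing the same article on different domains:
page_a = '<head><link rel="canonical" href="http://gmail.blogspot.com/people-widget.html"></head>'
page_b = '<head><link rel="canonical" href="http://googleenterprise.blogspot.com/people-widget.html"></head>'

# A proper cross-domain setup would make BOTH pages point at one URL.
print(find_canonicals(page_a) == find_canonicals(page_b))  # False
```

When each copy canonicalizes to itself, as above, the two articles compete with each other in the SERPs, which is exactly the situation a cross-domain canonical exists to prevent.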
By the way, when you search “people search”, the Gmail blog comes up 5th. What does this tell us about Google?
I guess Google really does have a duplicate content issue on their hands…
*edit* I should point out that I asked Matt Cutts today via Twitter if this is considered duplicate content. He has not gotten back to me.
I'm a San Francisco based growth marketer, currently working for Trulia. Formerly head of marketing at HotPads, former head of Distilled NYC, I have also run Destinee Media in Switzerland and worked in-house in online education.
The thoughts on this site are my own and I founded HireGun.co.