It's 9:30am and you've just started a pitch for a new SEO client. They're the curious type - wanting to know how search engines rank pages, why the changes you'll recommend will make an impact, where you learned to do SEO, and who you can list as good examples of your work. As you dive deeper into the requirements for the project, you arrive at the link building section. The client wants to know why link building matters so much. You pull up a chart of Search Engine Ranking Factors, noting the large role that links play in the ordering algorithms. They're mollified, but have one last question:
How does Google decide how much a particular link helps my rankings?
That's where this blog post comes in handy. Below, you'll find a list of many of the most important factors the engines consider when judging the value of a link.
Before we start, there's one quick concept that's critical to grasp:
As you've likely noticed, search engines have become more and more dependent on metrics about an entire domain, rather than just an individual page. It's why you'll see new pages, or those with very few links, ranking highly simply because they're on an important, trusted, well-linked-to domain. In the ranking factors survey, we called this "domain authority," and it accounted for the single largest chunk of the Google algorithm (in the aggregate of the voters' opinions). Domain authority is likely calculated off the domain link graph, which is distinct from the web's page-based link graph (upon which Google's original PageRank algorithm is based). In the list below, some metrics influence only one of these, while others can affect both.
#1 - Internal vs. External
When search engines first began valuing links as a way to determine the popularity, importance and relevance of a document, they adopted the classic citation-based rule that what others say about you is far more important (and trustworthy) than what you say about yourself. Thus, while internal links (links that point from one page on your site to another) do carry some weight, links from external sites matter far more.
This doesn't mean it's not important to have a good internal link structure, or to do all that you can with your internal links (good anchor text, no unnecessary links, etc.); it just means that a site/page's performance is highly dependent on how other sites on the web have cited it.
#2 - Anchor Text
An obvious one for those in the SEO business, anchor text is one of the biggest factors in the rankings equation overall, so it's no surprise it features prominently in the attributes of a link that engines consider.
In our experiments (and from lots of experience), it appears that "exact match" anchor text is more beneficial than simply inclusion of the target keywords in an anchor text phrase. On a personal note, it's my opinion that the engines won't always bias in this fashion; it seems to me that, particularly for generic (non-branded) keyword phrases, this is the cause of a lot of manipulation and abuse in the SERPs.
#3 - PageRank
Whether they call it StaticRank (Microsoft's metric), WebRank (Yahoo!'s), PageRank (Google's) or mozRank (Linkscape's), some form of an iterative, Markov-chain-based link analysis algorithm is part of all the engines' ranking systems. PageRank and its kin use the analogy that links are votes, and that pages which have accumulated more votes cast more influential votes of their own.
The nuances of PageRank are well covered in The Professional's Guide to PageRank Optimization but, at a minimum, an understanding of the general concepts is critical to being an effective SEO:
- Every URL is assigned a tiny, innate quantity of PageRank
- If there are "n" links on a page, each link passes that page's PageRank divided by "n" (thus, the more links, the less PageRank each one flows)
- An iterative calculation that flows juice through the web's entire link graph dozens of times is used to arrive at each URL's final score
- Representations like Google's toolbar PageRank or SEOmoz's mozRank on a 0-10 scale are logarithmic (thus, a PageRank/mozRank 4 has 8-10X the link importance of a PR/mR 3)
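To make those concepts concrete, here's a toy power-iteration sketch in Python. It's purely illustrative - the graph, damping factor and iteration count are my own assumptions, not any engine's actual parameters:

```python
# Toy sketch of the PageRank concepts above: every page starts with a
# small innate quantity, and each iteration flows rank/n through each
# of a page's n links, damped by a constant factor.

DAMPING = 0.85
ITERATIONS = 30

def pagerank(links):
    """links maps each URL to the list of URLs it links out to."""
    pages = set(links) | {t for targets in links.values() for t in targets}
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}  # innate starting quantity

    for _ in range(ITERATIONS):  # iterate until the scores settle
        new_rank = {page: (1.0 - DAMPING) / n for page in pages}
        for source, targets in links.items():
            if targets:
                share = rank[source] / len(targets)  # PageRank divided by "n"
                for target in targets:
                    new_rank[target] += DAMPING * share
        rank = new_rank
    return rank

graph = {
    "a.com/page1": ["b.com/page1", "c.com/page1"],
    "b.com/page1": ["c.com/page1"],
    "c.com/page1": ["a.com/page1"],
}
print(pagerank(graph))
```

The raw scores a calculation like this produces are tiny fractions; the familiar 0-10 toolbar PageRank/mozRank numbers are a logarithmic transformation of values like these.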
PageRank can be calculated on the page-level link graph, assigning scores to individual URLs, but it can also apply to the domain-level link graph, which is how metrics like Domain mozRank (DmR) are derived. By counting only links between domains (to make a crude analogy, squishing together all of a site's pages and keeping a single list of the unique domains that site points to), Domain mozRank (and the search engine equivalents) can be used to determine the importance of an entire site (which is likely at least a piece of how overall domain authority is generated).
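If it helps to picture that "squishing," here's a crude sketch of collapsing a page-level link graph into a domain-level one (the netloc parsing is deliberately naive - an illustrative assumption, not how Linkscape or the engines actually do it). A PageRank-style calculation run over the resulting graph yields a domain-level importance score:

```python
# Crude sketch of collapsing page-level links into domain-level edges,
# keeping only the links that cross from one domain to another.
from urllib.parse import urlparse

def to_domain_graph(page_links):
    """page_links maps each page URL to the page URLs it links to."""
    domain_links = {}
    for source, targets in page_links.items():
        src = urlparse(source).netloc
        edges = domain_links.setdefault(src, set())
        for target in targets:
            tgt = urlparse(target).netloc
            if tgt != src:  # ignore internal (same-domain) links
                edges.add(tgt)
    return domain_links

pages = {
    "https://a.com/1": ["https://a.com/2", "https://b.com/x"],
    "https://a.com/2": ["https://b.com/y"],
    "https://b.com/x": ["https://a.com/1"],
}
print(to_domain_graph(pages))  # {'a.com': {'b.com'}, 'b.com': {'a.com'}}
```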
#4 - TrustRank
The basics of TrustRank are described in this paper from Stanford - Combating Web Spam with TrustRank. The basic tenet of TrustRank is that the web's "good" and "trustworthy" pages tend to be closely linked together, and that spam is much more pervasive outside this "center." Thus, by calculating an iterative, PageRank-like metric that only flows juice from trusted seed sources, a metric like TrustRank can be used to predict whether a site/page is likely to be high quality vs. spam.
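As a rough illustration, a TrustRank-style score can be sketched as the same iterative loop as PageRank, except the non-damped portion flows only to the trusted seeds rather than to every page. The constants, graph and seed choice below are assumptions for illustration only:

```python
# Sketch of the TrustRank intuition: trust originates at seed pages
# and decays as it flows outward, so pages many hops from the seeds
# end up with scores near zero.

DAMPING = 0.85
ITERATIONS = 30

def trustrank(links, seeds):
    pages = set(links) | {t for targets in links.values() for t in targets}
    seed_share = 1.0 / len(seeds)
    trust = {p: (seed_share if p in seeds else 0.0) for p in pages}

    for _ in range(ITERATIONS):
        new_trust = {p: ((1.0 - DAMPING) * seed_share if p in seeds else 0.0)
                     for p in pages}
        for source, targets in links.items():
            if targets:
                share = trust[source] / len(targets)
                for target in targets:
                    new_trust[target] += DAMPING * share
        trust = new_trust
    return trust

graph = {"seed.org/": ["good.com/"], "good.com/": ["distant.net/"]}
print(trustrank(graph, seeds={"seed.org/"}))
```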
While the engines don't expose any data points around this particular metric, it's likely that some form of the "distance from trusted seeds" logic is applied by ranking algorithms. Another interesting point on TrustRank - Reverse TrustRank, which measures who links to known spam sites, is likely also part of the engines' metrics set. As with PageRank (above), TrustRank (and Reverse TrustRank) can be calculated on both the page-level and domain-level link graph. Linkscape uses this intuition to build mozTrust (mT) and Domain mozTrust (DmT), though our team feels that we still have a lot of work to do in refining these metrics for the future.
The key takeaways are fairly intuitive - get links from high trust sites and don't link to potential spam.
#5 - Domain Authority
Though the phrase "domain authority" is often discussed in the SEO world, a formal, universal definition doesn't yet exist. Most practitioners use it to describe a combination of popularity, importance and trustworthiness calculated by the search engines and based largely on link data (though some also feel the engines may use the age of the site here as well).
Search engines likely use scores about the "authority" of a domain in counting links, and thus, despite the fuzzy language, it's worth mentioning as a data point. The domains you earn links from are, potentially, just as important (or possibly more important) than the individual metrics of the page passing the link.
#6 - Diversity of Sources
In our analysis of correlation data, no single metric has a more positive correlation with high rankings than the number of linking root domains. This appears to be both a very hard metric to manipulate for spam (particularly if you need domains of high repute with diverse link profiles of their own) and a metric that indicates true, broad popularity and importance. You can see a list of the top pages and top domains on the web, ordered by the number of unique root domains linking to them, via Linkscape's Top 500.
Although correlation is not causation, the experience of many SEOs, along with empirical data, suggests that a diversity of domains linking to your site/page has a strong positive effect on rankings. By this logic, it follows that earning a link from a site that's already linked to you in the past is not as valuable as getting a link from an entirely new domain. This also suggests that links from sites and pages that have themselves earned diverse link profiles may be more trusted and more valuable than those from low-diversity sources.
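For illustration, here's a minimal sketch of how a "linking root domains" count might be computed from a list of backlink URLs. The root-domain extraction below is naive (it just takes the last two labels); real tools need the public suffix list to handle cases like .co.uk:

```python
# Count unique linking root domains rather than raw links: two links
# from the same site add less diversity than links from two sites.
from urllib.parse import urlparse

def linking_root_domains(backlink_urls):
    roots = set()
    for url in backlink_urls:
        host = urlparse(url).netloc.lower()
        roots.add(".".join(host.split(".")[-2:]))  # naive root domain
    return roots

backlinks = [
    "https://blog.example.com/post-1",
    "https://www.example.com/about",
    "https://news.othersite.org/story",
]
print(len(linking_root_domains(backlinks)))  # 2 root domains, not 3 links
```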
#7 - Uniqueness of Source + Target
The engines have a number of ways to judge and predict ownership and relationships between websites. These can include (but are certainly not limited to):
- A large number of shared, reciprocated links
- Domain registration data
- Shared hosting IP address or IP address C-blocks
- Public acquisition/relationship information
- Publicized marketing agreements that can be machine-read and interpreted
If the engines determine that a pre-existing relationship of some kind could inhibit the "editorial" quality of a link passing between two sites, they may choose to discount or even ignore those links. The anecdotal evidence that links shared between "networks" of websites pass little value (particularly through the classic SEO strategy of "sitewide" links) is one data point many in the organic search field cite on this topic.
#8 - Location on the Page
Microsoft was the first engine to reveal public data about their plans to do "block-level" analysis (in an MS Research piece on VIPS - VIsion-based Page Segmentation).
Since then, many SEOs have reported observing the impact of analysis like this from Google & Yahoo! as well. It appears to us at SEOmoz, for example, that internal links in the footer of web pages may not provide the same benefit as those same links placed in the top/header navigation. Others have reported that one way the engines appear to be fighting pervasive link advertising is by diminishing the value that external links carry from the sidebar or footer of web pages.
SEOs tend to agree on one point - links from within the "content" of a piece are the most valuable, both for the ranking value they pass and, fortuitously, for click-through traffic as well.
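As a very crude illustration of the idea (nothing like the visual analysis VIPS performs), one could classify links by the structural region of the markup they appear in. The region tags and the "content" default below are my own assumptions:

```python
# Tag each link with the page region it sits in, so header/footer/
# sidebar links can be weighted differently from in-content links.
from html.parser import HTMLParser

REGIONS = {"header", "nav", "footer", "aside"}

class LinkRegionParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack = []   # currently open region tags
        self.links = []   # (href, region) pairs

    def handle_starttag(self, tag, attrs):
        if tag in REGIONS:
            self.stack.append(tag)
        elif tag == "a":
            href = dict(attrs).get("href")
            region = self.stack[-1] if self.stack else "content"
            self.links.append((href, region))

    def handle_endtag(self, tag):
        if tag in REGIONS and self.stack and self.stack[-1] == tag:
            self.stack.pop()

parser = LinkRegionParser()
parser.feed('<nav><a href="/about">About</a></nav>'
            '<p>See <a href="https://example.com">this study</a>.</p>'
            '<footer><a href="/terms">Terms</a></footer>')
print(parser.links)
# [('/about', 'nav'), ('https://example.com', 'content'), ('/terms', 'footer')]
```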
#9 - Topical Relevance
There are numerous ways the engines can run topical analysis to determine whether two pages (or sites) cover similar subject matter. Years ago, Google Labs featured an automatic classification tool that could predict, based on a URL, the category and sub-category for virtually any type of content (from medical to real estate, marketing, sports and dozens more). It's possible the engines use automated topical-classification systems like these to identify "neighbourhoods" around particular topics and weight links more or less heavily, based on whatever behaviour they see as accretive to the quality of their ranking results.
I personally don't worry too much about topical relevance - if you can get a link from a topic-agnostic site (like NYTimes.com) or a very specific blog on a completely unrelated subject (maybe because they happen to like something you published), I'm bullish that these "non-topic-specific" endorsements are likely to still pass positive value. I think it's somewhat more likely that the engines evaluate potential spam or manipulative links based on these analyses. A site that's never previously linked to pharmaceutical, gambling or adult topic regions may appear as an outlier on the link graph in potential spam scenarios.
#10 - Content & Context Assessment
Though topical relevance can provide useful information for engines about linking relationships, it's possible that the content and context of a link may be even more useful in determining the value it should pass from the source to the target. In content/context analysis, the engines attempt to discern, in a machine parse-able way, why a link exists on a page.
When links are given editorially, certain patterns arise. They tend to be embedded in the content, link to relevant sources, and use accepted norms for HTML structure, word usage, phrasing, language, etc. Through detailed pattern-matching and, potentially, machine learning on large data sets, the engines may be able to distinguish what constitutes a "legitimate," "editorially-given" link intended as an endorsement from those placed surreptitiously (through hacking), those that result from content licensing (but carry little other weight), those that are pay-for-placement, etc.
#11 - Geographic Location
The geography of a link is highly dependent on the perceived location of its host, but the engines, particularly Google, have been getting increasingly sophisticated about employing data points to pinpoint the location-relevance of a root domain, subdomain or subfolder. These can include:
- The host IP address location
- The country-code TLD extension (.de, .co.uk, etc)
- The language of the content
- Registration with local search systems and/or regional directories
- Association with a physical address
- The geographic location of links to that site/section
Earning links from a page/site targeted to a particular region may help that page (or your entire site) to perform better in that region's searches. Likewise, if your link profile is strongly biased to a particular region, it may be difficult to appear prominently in another, even if other location-identifying data is present (such as hosting IP address, domain extension, etc).
#12 - Use of Rel="Nofollow"
Although in the SEO world it feels like a lifetime since nofollow appeared, it's actually only been around since January of 2005, when Google announced it was adopting support for the new HTML tag. Very simply, rel="nofollow", when attached to a link, tells the engines not to ascribe any of the editorial endorsement or "votes" that would boost a page/site's query-independent ranking metrics. Today, Linkscape's index notes that approximately 3% of all links on the web are nofollowed, and that of these, more than half appear on internal, rather than external-pointing, links.
Some question exists in the SEO field as to whether, and how strictly, each individual engine follows this protocol. It's often been purported, for example, that Google may still pass some citation quality through Wikipedia's external links, despite the use of nofollow.
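For the curious, here's a minimal sketch of how an index might tally nofollowed links when arriving at a figure like the ~3% above. The parsing and counting are simplified for illustration:

```python
# Count links on a page, noting how many carry rel="nofollow".
from html.parser import HTMLParser

class NofollowCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.total = 0
        self.nofollowed = 0

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        self.total += 1
        rel = (dict(attrs).get("rel") or "").lower()
        if "nofollow" in rel.split():  # rel can hold several values
            self.nofollowed += 1

counter = NofollowCounter()
counter.feed('<a href="/a">counted</a>'
             '<a href="/b" rel="nofollow">discounted</a>')
print(counter.nofollowed, "of", counter.total)  # 1 of 2
```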
#13 - Link Type
Links can come in a variety of formats. The big three are:
- Straight HTML Text Links
- Image Links
- Javascript Links
Google recently announced that they're not only crawling this third group, but passing link endorsement metrics through them (which has many upset about the reversal of their policy of treating Javascript as a way to delineate paid/advertising links). For years now, they've also treated the text in an image's alt attribute in a similar fashion to how anchor text is handled in standard text links.
However, not all links are treated equally. In both anecdotal examples and testing, it appears that straight HTML links with standard anchor text pass the most value, followed by image links with keyword-rich alt text and, finally, Javascript links (which still aren't universally followed or counted as endorsements, at least in our experience). Link builders, content licensers, badge and widget creators and those who enable embeddable content should all, in my opinion, assume the worst about the engines' ability to handle and pass value through non-standard links, and should aim for HTML text links with good anchor text as the optimal methodology.
#14 - Other Link Targets on the Source Page
When a page links out externally, both the quantity and targets of the other links that exist on that page may be taken into account by the engines when determining how much link juice should pass.
As we've already mentioned above (in item #3), the "PageRank"-like algorithms from all the engines (and SEOmoz's mozRank) divide the amount of juice passed by any given page by the number of links on that page. In addition to this metric, the engines may also consider the quantity of external domains a page points to as a way to judge the quality and value of those endorsements. If, for example, a page links to only a few external resources on a particular topic, spread out amongst the content, that may be perceived differently than a long list of links pointing to many different external sites. One is not necessarily better or worse than the other, but it's possible the engines may pass greater endorsement through one model than another (and could use a system like this to devalue the links sent from what they perceive to be low-value-add directories).
The engines are also very likely to be looking at who else a linking page endorses. Having a link from a page that also links to low quality pages that may be considered spam is almost certainly less valuable than receiving links from pages that endorse and link out to high quality, reputable domains and URLs.
#15 - Domain, Page & Link-Specific Penalties
As nearly everyone in the SEO business is aware (though those in the tech media may still be a bit behind), search engines apply penalties to sites and pages, ranging from the loss of the ability to pass link juice/endorsement all the way up to a full ban from their indices. If a page or site has lost its ability to pass link endorsement, acquiring links from it provides no algorithmic value for search rankings. Be aware that the engines sometimes show penalties publicly (inability to rank for obvious title/URL matches, lowered PageRank scores, etc.) but keep these penalties inconsistent so that systematic manipulators can't acquire solid data points about who gets "hit" and who doesn't.
#16 - Content/Embed Patterns
As content licensing & distribution, widgets, badges and distributed, embeddable links-in-content become more prevalent across the web, the engines have begun looking for ways to avoid becoming inundated by these tactics. It isn't that the engines don't want to count the vast majority of links that employ these systems, but they're wary about over-counting or over-representing sites that simply do a good job of getting distribution for a single badge/widget/embed/licensing-deal.
To that end, here at SEOmoz, we think it's likely that content pattern detection and link pattern detection plays a role in how the engines evaluate link diversity and quality. If the search engines see, for example, the same piece of content with the same link across thousands of sites, that may not signal the same level of endorsement that a diversity of unique link types and surrounding content would provide. The "editorial" nature of a highly similar snippet compared to those of clearly unique, self-generated links may be debatable, but from the engines' perspectives, being able to identify and potentially filter links using these attributes is a smart way to future-proof against manipulation.
#17 - Temporal / Historical Data
Timing data about the appearance of links is the final point on this checklist. As the engines crawl the web and see patterns in how new sites, new pages and old stalwarts earn links, they can use this data to help fight spam, identify authority and relevance, and even deliver greater freshness for pages that are rising quickly in link acquisition.
How the engines use these patterns of link attraction is up for debate and speculation, but the data is almost certainly being consumed, processed and exploited to help ranking algorithms do a better job of surfacing the best possible results (and reducing the abilities of spam - especially large link purchases or exploits - to have an impact on the rankings).
While the list above includes many data points, it's almost certainly not comprehensive. Please feel free to suggest others that belong here in the comments below.
Just a small correction to #12 ... rel=NOFOLLOW is not really a new HTML-tag but rather a new attribute value. rel= is not new - only the NOFOLLOW value :)
Absolutely. I know how annoying it is to constantly correct / be corrected on precise terminology, but I think it's equally important that, in an industry so segmented between SEOs, developers, socials, marketers, designers, etc., we're as clear and consistent with our use of terminology as possible, in order to minimise the already extensive confusion in communicating across departments and areas of expertise...
I'd really like to know how long #11 (Geographic location) based on the server's IP address will be important for search rankings. As more and more web applications move into "the cloud" and thus get IPs from the ranges of big cloud providers such as Amazon, which are mainly hosted in the US or UK, I don't think that GYM will put much weight on it in the future.
Furthermore, the trend seems to be toward mirroring content to various locations around the globe (as Amazon's CloudFront does), which then serve the content based on the vicinity of the client's IP, resulting in faster response times and a faster web experience for users... and that's one of Google's current incentives, isn't it (e.g. PageSpeed)?
By the way, I posted a question to Matt Cutts on the Google Moderator page about exactly that IP address / cloud problem... but haven't got an answer yet. If you're interested, maybe it'll help if you vote it up :-)
The results from different countries vary a lot, so Google will put more weight on links from that country. It uses the server's IP address as a clue to a site's target audience. It also uses the domain name extension, and you can also set your location in Google Webmaster Tools. So even if you're in the cloud, you can still give clues as to where you are. If I were targeting the US and only shipped there, should I rank highly in, say, Australia if I had no links in an Australian context?
Good example of domain authority "perhaps" in action:
A while ago I started the process of creating some static HTML landing pages to target broad search terms such as "connectors" and "capacitors".
These aren't amazing pages, and at the time had zero backlinks with the exception of 1 anchor text link from a country selection flags page at www.farnell.com.
These pages ended up being indexed and ranked 1st page on the majority of broad terms within about 30 minutes each. They still have non-existent link portfolios in their own right, yet they are in very good ranking spots on very competitive terms.
Make your own mind up, but personally I fully agree with the concept of domain authority, and don't need a "technical" definition to support my thoughts.
Examples:
Search Term - Connectors
www.farnell.com/uk/connectors
Search Term - Capacitors
www.farnell.com/uk/capacitors
Thanks for the example :) I love evidence. It would be really neat if you could throw together a post (YouMoz, or elsewhere) laying this out as a case study, including analytics data if you've got it :)
I know, I know, I ask a lot.
Yep, I'll try and do this today!
Cheers
Ben
I have bunged this up into YouMoz just now. Unfortunately I can't give away much around the analytics because I have to protect my organisation's data, but I have been able to show clear examples of how it was set up and how it has resulted in significant rank increases from nowhere!
Hopefully it hits the nail on the head.
Cheers,
Ben
Great resource for training link builders.
Props to whatever/whoever you use for creating graphics. Illustrating the point really helps when explaining link value concepts.
Even though this list is extremely thorough, there seems to be no mention of the RankRank® metric that was at first a cornerstone of the Webfluence linking formula and is rumored to have recently made its way into the Googoritm ;-)
On a more serious note there are a few other coding issues and server directives that can affect link juice or “link visibility”:
Also, really great post!
Even today the standard is the same as you have explained in it.
I have at least 10 more for you, but I'm not posting them here as I don't want to get MM's machine gun aimed in my direction. :) Seriously, I think it's vital to remember that these 17 (or my 27) are nothing but educated conjecture. I make a living via educated/experience-based conjecture turned into link building strategy and execution. I tell clients I never rely only on "data" when making link building strategy choices, because any data will be based on nothing more than a "hunch". Still, this is yet another outstanding post that shows just how complex a link building strategy can be. Thanks Rand!
Eric Wad
Great post. I would be curious to know: if a link to a page was technically incorrect but still worked (using, for instance, uppercase rather than lowercase), would the engines frown upon it somewhat?
Oh... and with HTML5 links in content will be valued far more, because by then the spider can understand much better what the main content is and what is not.
Ever since Matt Cutts' comment on 'editorial'-type links, I have been carrying out various tests trying to establish the exact effect of naturally-given links built into content, for example a blog post.
The results so far have been very positive with some pages moving up 2 pages on Google by simply applying some anchor targeted links in the content of a blog post.
A fairly new link building tactic is offering sites unique content for an anchored link back to your site, this might be a great idea if Google are going to give so much weight to editorial links.
Great article, covers all the bases and a little more.
Really interesting stuff. I have a couple of days off and have been digging back through a couple of these posts. I find that increasing TrustRank is probably the most difficult thing to do with new domains.
That's quite likely the case. One thing PRO members can do is go to www.seomoz.org/labs/link-intersect and plug in your site, plus three or four sites that are already ranking reasonably well (ranks you might be able to attain). This will give you a lot of sites that could be viable link acquisitions for you. If you need trusted links, look for the sites with high mozTrust.
Rand, #8 is new for me as a normal blogger, yet I am following along with my own SEO for my blog + website(s)! A handy list is presented here, for sure!
Rand - Thanks for yet another excellent article. There may be debates in the comments section about the validity of a few metrics but overall an interesting read.
Despite the slight rant (understandable, considering that however interesting the SEO factors document is, it isn't fact), there are always things to remember when Michael posts:
"And yet we see Wikipedia all over the search results. Instead of crediting Wikipedia's internal linking structure and extensive on-page repetition of keywords (as well as use of keywords in titles and URLs), many people in the SEO community simply conclude that "Wikipedia is an authority domain" and that means it cannot be beaten in the search results. "
Guys, I have one question that keeps striking me every time I talk with people about the PR referred to in #3:
Does a page lose PR if it links out to other sources? Because if something flows, something must be going out/away.
Or does the page keep its strength, with only the specific links subject to a dampening factor?
There's a lot of buzz about whether or not it is a bad idea to post external links on your website, Wedding. My feeling is that linking to certain external sites with authoritative qualities [i.e. a .gov website or a .edu that is topically relevant to my website] may actually give MY site a boost in PR, mozTrust, etc. Of course, I go back and forth on this issue, but for the most part, I no longer see a need to nofollow all external links - there's value in building a relationship with authoritative sites in my niche.
I agree with nichenet. Linking out to other authoritative (and relevant) sites reflects positively on your own site.
I also believe that external linking shows the search engines that you are attempting to provide readers with valuable and relevant information. At the end of the day, that is what search engines are attempting to do: provide users with the most relevant search results based on a searcher's query.
Yes, that's what I think as well.
But my question was:
If a page has, for example, a PR of 2 with no links on it, then the moment it links out to one source, does it lose some of that PR and pass it through the link? E.g. PR goes down by 15% = new PR of 1.7.
This would mean that you could lose your PR 2 if you place more than 6 external links on that page... the new PR becomes 0.75 (visible PR = 1).
I'd go back and check out how PageRank flow works a bit. You can't "lose" PageRank on an individual page by linking out (at least, not in the direct way you're describing). Think of a page as having an innate importance, ascribed to it by the links that point there, AND ALSO having the ability to pass PageRank when it links out in proportion to that importance. So, in an imaginary (and somewhat rough) scenario, a page has a PR of 4 with two links on it, and each of those passes PR4/2 * 0.85. That doesn't mean the PR4 page drops in importance, however.
Good blog post on this here - https://www.seomoz.org/blog/how-pagerank-works-why-the-original-pr-formula-may-be-flawed - from my grandfather!
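To put rough numbers on that scenario (illustrative only - real PageRank values aren't visible at this precision):

```python
# A page with PR 4 and two outbound links, using a 0.85 damping factor.
damping = 0.85
page_pr = 4.0
outbound_links = 2

passed_per_link = (page_pr / outbound_links) * damping
print(passed_per_link)  # 1.7 flows through each link...
# ...while the linking page itself keeps its PR of 4.
```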
The way I understand it is every page, once metrics are assigned, has a % of its real PageRank that it can spend in whatever way it wants. So if a site wants to link out to trusted sites in their industry and use this PageRank in that way, then it can. Or if they choose to spend that juice internally on other relevant pages, then they can do that as well. There are benefits to both. The only thing I would be careful of is linking to competitors with good anchor text, as you don't want to help them gain relevancy for competitive keywords.
As SEOs, I believe we often overthink everything. If it makes sense to link to another source (one that you trust isn't spam), then link to it.
Haha, that's a great - both feet on the ground - perspective!
But I guess it's true: there should be more attention paid to the natural, organic characteristics of the web...
Wow, great content. I will put it into practice. You should write a book.
I just noticed that I really have too many footer links on external sites. Thank you for the reminder. Sometimes it has to be repeated to me before I finally do something about it.
It is really cool of you that you put up the same information in different contexts.
This great post reminds us all of the 80-20 rule. Watch the Matt Cutts video from WordCamp. He says that WordPress provides 70-80% of what you need for SEO. Maybe true, but in a non-linear way. The last 20% is where the men are separated from the boys (or women from the girls). And this in-depth blog post illustrates the work and understanding that goes into that last 20%.
I laugh when I read that the SEO business is done, that it's all automated now. Nope. The issues around link building and the ongoing algorithm modifications to make relevancy more accurate will make good SEO people even more valuable.
I learn every single day from posts just like this. It supports my blog post SEO Is Never Done.
"#8 - Location on the Page" is, I think, one of the most commonly misvalued aspects of on-page SEO. Footer links = fail. In-body links = win.
It is very useful.
Mark, a fresh face at SEOmoz. Thank you for sharing.
SEO is common sense and this article made perfect sense. Well done Rand.
Great information, Rand. This gives me enough ammunition to create & execute link building campaigns for most of my clients.
Cheers,
Zuheb
I've not been to SEOmoz since the middle of last year (SEO is just one part of running our business for me), but I just found a link to this great post. I suppose I have a fair bit of reading to do now to catch up on the last 6 months!!
I can't make it through this whole post at one time. I learned something with most of these steps, especially number 8.
Finally, an article on links which makes sense... However, I still cannot get my head around the nofollow... Either way, another interesting and relevant article. Good job!
This article got it right, and does indeed cover everything we learned at my workplace during training. I'd like to help clarify the 'nofollow' for you, cocoonfx, if you'll allow it.
Let's say you're a highly valuable, known and trustworthy site, and you are writing a blog post in which you'd like to refer to an article you read and did not find very useful, for example. You link to this article. Without the rel=''nofollow'', the default being 'dofollow,' a piece of your 'pie,' your trustworthiness, your page ranking, your status basically, will be transferred over to the page you linked to, because it's like you're the cool kid in high school and you tell everyone publicly that you're friends with the lesser-known kid. Suddenly this kid gets attention.
By using the nofollow, you're linking to the blog post but telling search engines that it's just a link, not a 'friendship declaration;' you're not sharing your glory and fortune with the linked page, you're just citing it for other purposes. Kind of like the cool kid at school again mentioning the lesser-known kid while telling her/his friends about a class and the people who were in it. It's just the facts.
I hope it all makes sense now. If not, maybe this article by Purposive about linking will help; it puts things in simpler terms.
Very informative writing about the value of Links. Thanks a lot!
Hi Randfish,
I am very new here, so I am sorry if I am asking you a very basic question. I am doing link building for a company and I realised that many competitors simply leave comments on people's blogs with anchor text linking to their websites. Is this effective or just a waste of time?
BTW, I wanted to give you more thumbs up but I don't know how :)
Thanks,
Paula
@pvalbocino Thanks for asking. For the most part just running all over the web and posting comments on blogs then leaving your link text is a HUGE waste of time.
Most comments on most blogs are "nofollow" links. That means that if your job is making links for your company, those links don't count.
To build real value and get long term results for your company, you need to focus on other areas to get links.
If you hang around here (SEOmoz) and read a couple posts (even older ones) you will get plenty of guidance on where to start. You will be doing your employer a great and valuable service. Their business will grow, you'll get a raise, buy a nice car, move to a bigger home, get married, raise a family, get into a higher tax bracket, and maybe someday you will be reading a question on SEOmoz from a new person asking what they should do. Then you'll answer that question and this all starts over. Pretty cool.
Hear, hear!
Great post Rand, very thorough :)
Some Q&A (hopefully "A" :P): if you were to devise an experiment that measures Google's/Yahoo!'s/Bing's ability to crawl JS links, how would you go about it?
Furthermore, about Wikipedia links that are nofollowed but still considered to pass some juice...
When I think about it in terms of social media links - which are nofollowed/JS/bit.ly etc. - do you think that in the near or far future, those links from high-trust social media pages will pass link juice?
All the posts here are useful in some way or another... Thanks...
Great stuff thanks
Rand, I must say that I'm starting to become a true fan of yours. Although we're somewhat aiming for the same business objective (albeit in a different geographic location, at least for now), I've learned a lot from you thus far - both on SEO-specific topics and on English (my native tongue is Dutch; maybe you can learn something on that from me? ;)), and on blog post writing in general as well.
This post of yours is another example of your blogging skills. It's tremendous in my eyes, honestly...
This is a great in-depth discussion, and I'm glad to see people challenging the "conventional wisdom" about search. Fact is, until Google agrees to enter the conversation, much of our discussion is pure conjecture. It bothers me that Google refuses to discuss their search strategy and methods at all, since Google is a great driving force for much business information exchange. If we require transparency in corporations managing money, healthcare, etc., why not information? Is it any less critical to our economy and security? I'm not advocating that the government regulate search - that would truly suck. I'm saying "c'mon Google, wake up and join the social media world. Let's talk about what you're doing and why."
Noble objective of yours, but we cannot neglect the fact that Google is in a field of business where harsh competition is present. If Big G became truly open about their search / ranking methods, they'd put their entire corporate value at stake. And for what? So that (some areas of) the SEO industry could manipulate their SERPs?
The only way for Google to become transparent on search ranking factors would be by losing the majority of their current market share in a way they cannot overcome without becoming 'open source'.
Plus, the fun of SEO-ing would fade away, and would that be something worth pursuing? :P
Rand, as always, great stuff.
I seriously doubt that without actually getting Search Engine Employees to give actual proof of concept inside information there is going to be a better way to explain this to a client.
What is astonishing (to me at least) is that this is only one area of the determined merit of a website for search results purposes. Although extremely important, it's only one part of the puzzle.
Thanks for the awesome work.
Aren't #4 & #5 the same thing - domain authority and trust? This one's gonna need a few re-reads to get the most out of it.
Hey Rand. Another crystal clear post that's getting printed out.
Honestly, between products like Linkscape and articles like this, you should be registering the name LINKmoz.org.
The information that's available here at SEOmoz has been getting better and better, and I find myself marking many of the other RSS feeds I used to read regularly as read without actually reading them. I'm finding more and more substance from SEOmoz, and more and more fluff elsewhere.
I'm really gonna have to break down one of these days and reclaim my pro membership...
Absolutely one of the greatest SEO posts Ive ever read! Thank you for this outstanding information, I feel like I just learned about 5 new things that I had never thought of before! You guys are like the kings of SEO!
Exact match is the way to go most of the time.
Google loves exact match domains, and they also love exact match anchor text.
I find exact match domaining to be less effective than it was, say, 3 years ago.
In fact, I had an exact match domain for my main SEO site until June, at which point I moved to a more brandable "half-match" domain. I'm doing better on local rankings since removing half of the exact-match equation.
Even if exact matches did provide the same edge rank-wise as they did a few years ago, I'd still opt for an extra layer of brandability over a couple of spots on the SERPs.
I have used and still use exact match for highly competitive commercial terms. They still rank on page 1 with little work on and off page.
They're also a 2nd part of getting another domain on page 1 of Google. Great 2nd strategy that helps out in a lot of ways.
Hey Rand,
Great roundup.
Just to add a bit more to the point you made at the end of the #12 point, it is not true that Google will not pass any link metrics through a nofollowed link.
In an experiment I performed with two links pointing to the same target from the same page, the first link passed anchor text even when nofollowed.
I wrote more about it here: https://www.seo-scientist.com/first-link-counted-rebunked.html
Cheers
Branko
rel="nofollow" links show up in the back link information in GWT, and Googlebot does appear to follow this type of link to discover new information.
To my mind, nofollow is a bit of a misnomer, perhaps rel="nocount" or rel="novalue" would have been more accurate names.
Great idea, James. I think I like "nocount" better since there could still be value in the link. Google could still be looking at the anchor text, which would provide value, or trust value might still be passed.
I believe link metrics are passed across rel=nofollow links, just at a far reduced level.
I still think the idea of a site linking to another site and then saying "but we can't vouch for its quality" is a bloody ridiculous situation, and an example of SEOs damaging search for everyone.
Not all SEOs, of course, but probably the vast majority these days.
Great article. It covers pretty much all aspects of links in SEO.
That's a great long post Rand. Well done for this one.
I agree with you about the anchor text policy. Search engines could change the way they look at it. I think exact match anchor text does not look natural.
If someone had asked me how many different ways there were for a search engine to look at a link, I'm pretty sure I would have given a number way lower than 17!
Although I guess it all depends how you break it down.
Really useful break down though :) thumbs ^
What a fabulous and educational post about link building! I've just included it in my favs.
Thanks so much for sharing these great tips.
Vicky
Great information. We have been working with exact match as well, while changing it up once in a while to "look" natural, but wondering if we are diluting the overall link profile of the page.
Another Great post, Very informative!
Great post, Rand! a solid summary of link value factors that I tend to agree with, based on my own experience.
Great post, thanks for the refresher. FYI, the comments have reached 14 pages :o
Wow, I just had a feeling that something big would come up today.
Great post, Rand - it's really comprehensive and covers pretty much everything about link value.
totally awesome article and lots of useful information. thanks for the post
My internet induced ADD didn't allow me to focus properly on this post - it's long. I'll need a few rereads to make this all stick. I've got most of this down, but 17 is an impressive amount of link factors.
Thanks, Rand.
Great overview post, Rand. I believe you've touched on just about everything there is to say about links. Personally, I am definitely seeing good results when link building and targeting many different domains rather than links from the same sites. Diversity is a good thing.
Not sure if you technically mentioned it above, but links in the middle of a sentence seem to be given more weight... rather than some anchor text link in a sidebar. Definitely worth noting.
I also have seen evidence that links from pages that have a lot of traffic tend to help, as well, even if they're nofollow links.
Thanks Rand,
More tools for my client explanation arsenal.
Keep 'em coming.
I hate to leave a fan-boy comment, but this is really a great resource. It's clear, both from clients and the Q&A here on the site, that many people still have a very narrow view of link-building, usually focusing on quantity over quality (or, at best, one very small aspect of quality). The algos are becoming more complex every day, and a one-trick-pony approach to SEO just doesn't cut it anymore.
Sweet write up Rand! (How original, I know...)
Thorough resource pages like this one are so handy in catching some of the obvious-yet-easy-to-forget optimization factors.
I wrote a page on measuring quality backlinks last month, in which I added mozRank and TrustRank to the list of important factors determining link quality. Seems I forgot to mention 'geographic location', 'link types', and 'other link targets', though :o
Thanks!
Very interesting topic Rand Fishkin. This blog post will be sent to all our SEOs and memorized.