This week, as Scott heads off on vacation for Thanksgiving, I'm posting our latest Whiteboard Friday on the concept of links as votes of importance from the search engines' perspective and how link juice passes. Below the video itself, I've created a few helpful graphics to better illustrate the phenomenon I'm discussing:
There are two big topics from the video that would benefit from additional explanation, and I think visual representation is probably the way to go (particularly since these are supposed to be for our more visual learners in the blog audience):
How Links Pass Importance from one Page to Another:
How Advanced SEOs Can Control the Flow of Link Juice:
A word of warning - I don't say "advanced" lightly. We've had plenty of experiences where implementing what we thought was a smart nofollow strategy to control link juice has either backfired and cost us traffic or had little to no visible impact. The best way to implement strategies that rely on link flow control is, in my opinion, to start small, test, then refine and push out to the site as a whole. It's most effective in our experience on large domains with tens of thousands to millions of pages and lots of pages in "supplemental." When used properly, link flow can help to get these into the main index.
Also - since raw link juice (aka global link popularity, aka PageRank) is one of several hundred factors in the algorithms at the major engines, don't be surprised if this tactic has little impact on competitive rankings. We find it to be much more valuable and effective in pushing up the visibility of very long tail material.
Hope everyone has a great Thanksgiving! We will most likely not have a blog post up between now and Cyber Monday, so please enjoy a few days off (or, you know, go check out all the cool stuff inside premium you've been neglecting to read).
p.s. Note that the images I've created are not to scale and don't correspond to any given percentage or amount of link juice lost or passed. They're only meant to be representative of the basic link flow concepts.
UPDATE FROM RAND: A bit of my logic in the images is in dispute in the comments, and we're asking Si to take a look at some math to help us figure it out. Basically, the strategy of sculpting link juice is still sound, but the idea of a "leak" of juice through adding additional links to a page may not be accurate (at least, according to the original Google PR formula). Many thanks to Hamlet Batista for bringing this up :)
I'm sorry Rand, but I see a fundamental problem with a critical part of this explanation.
There is no such thing as PageRank/Link Juice leaking.
The PageRank research paper (see sections 2.2, 2.3, and 2.4) defines the PageRank of a page as the sum of the PageRanks of all pages pointing to it (tempered by the number of links each linking page has).
In other words, only inbound links can affect the PageRank of a page, not the outgoing ones. For example, Yahoo can link to as many pages as they want from their home page and that will not affect their PageRank (we are not considering potential penalties, obviously). The more pages they link to, the less importance/link juice they will pass to each one of them.
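Hamlet's point can be sketched with a toy power iteration, a simplified version of the original formula PR(p) = (1-d)/N + d * sum(PR(q)/out(q)). The page names and numbers below are hypothetical and purely for illustration: the "yahoo" page's own score is unchanged by how many pages it links out to, while each linked page's share of the passed juice shrinks.

```python
DAMPING = 0.85  # the d in the original paper

def pagerank(links, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    n = len(links)
    pr = {page: 1.0 / n for page in links}  # uniform starting values
    for _ in range(iterations):
        new = {}
        for page in links:
            # A page's score sums over its INBOUND links only: every q
            # that links here contributes its own score, split evenly
            # among q's outgoing links.
            inbound = sum(pr[q] / len(links[q])
                          for q in links if page in links[q])
            new[page] = (1 - DAMPING) / n + DAMPING * inbound
        pr = new
    return pr

# "yahoo" linking to one page vs. two: its own score is identical in
# both webs, but each target's share of the passed juice is halved.
few_links = pagerank({"yahoo": ["a"], "a": [], "b": []})
many_links = pagerank({"yahoo": ["a", "b"], "a": [], "b": []})
```

Running this, "yahoo" ends up with the same score in both webs (no inbound links ever mention its outlinks), while page "a" receives less juice once it has to share.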
This is a very interesting topic and I will set some time apart to write a detailed Youmoz post.
For those of you that want to learn more about PageRank, I recommend: Google's PageRank and Beyond - The science of search engine rankings by Amy N. Langville and Carl D. Meyer
Cheers
Hamlet, that isn't quite correct, or you wouldn't be able to create a sacrificial linking structure.
If you point more juice to internal pages, then due to the cyclic nature of PageRank calculations it is possible to reduce the overall PageRank of a page, and indeed a whole site, by having lots of external links.
However there is a balancing act, because if you link out generously as a strategy, you tend to also get more links in return, and whilst from a blog you might send traffic elsewhere, much of that returns, and the new links bring in traffic.
Andy, this requires a longer explanation; but let me tell you why "sacrificial linking structures" are possible.
When a page decides to link to fewer pages its links/votes carry more weight and hence the pages that it is linking to receive more PageRank/link juice.
When we talk about PageRank, we are talking about the stabilized values, not the transient values that occur during the iterations of the Power Method.
I'm not saying that is not possible for a page to lose PageRank indirectly by linking out, but I'd need to see a mathematical exercise that proves it.
Well, as an example, let's take my recent PageRank "hit" coverage.
I have about 400 links coming in to just one of those articles
There are lots of links leaving the page to external sources
1. In the article there must be 20 or so links
2. 142 comments that are dofollow
3. 199 Trackbacks that are dofollow
Now if I didn't have extensive internal linking on that page, it would be leaking like a sieve and none of the juice would be flowing to my internal pages.
Those internal pages however ultimately will also pass some of their juice back to my PageRank "hit" page.
If they don't receive the juice, they can't give it back.
In Leslie Rhode's dynamic linking ebook there are some structures that demonstrate this, and there are sacrificial structures that demonstrate how to maximise the juice being passed to a primary site.
Here is an example of a Spider Circle as I mentioned in a previous comment.
Now I do something crazy: I stick 4 external links on each of the content pages without adding more internal linking to counteract it.
Even if I stick 4 inbound links on each of those content pages to try to counteract the loss from the external links, overall the site has less juice.
Obviously this isn't real Pagerank we are talking about, just relative values, but even so placing external links on those pages has a detrimental effect on the PageRank of the pages themselves.
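Andy's spider-circle argument can be checked with the same kind of toy iteration (hypothetical pages and simplified math, not real PageRank): three pages in a closed ring keep recirculating juice, but once each of them also links to an external page, the converged score of every page in the ring, and of the ring as a whole, drops.

```python
DAMPING = 0.85

def pagerank(links, iterations=100):
    """Toy power iteration: links maps each page to its outgoing links."""
    n = len(links)
    pr = {p: 1.0 / n for p in links}
    for _ in range(iterations):
        # Synchronous update: each page's new score sums contributions
        # from the pages that link to it, split among their outlinks.
        pr = {p: (1 - DAMPING) / n + DAMPING * sum(
                 pr[q] / len(links[q]) for q in links if p in links[q])
              for p in links}
    return pr

# A closed three-page ring ("ext" exists but receives no links)...
ring = {"a": ["b"], "b": ["c"], "c": ["a"], "ext": []}
# ...versus the same ring where every page also links out to "ext".
leaky = {"a": ["b", "ext"], "b": ["c", "ext"],
         "c": ["a", "ext"], "ext": []}

ring_total = sum(pagerank(ring)[p] for p in "abc")
leaky_total = sum(pagerank(leaky)[p] for p in "abc")
```

The external links never appear in any ring page's own equation, yet after the iterations converge the ring pages score lower, because juice that used to cycle back to them now escapes each pass. That is the cyclic effect Andy is describing.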
Oh, and I will be doing some simple site restructuring to benefit from all those juicy links a little more, and at the same time improve user experience.
Andy - I will do a math exercise with this example you are mentioning. Unfortunately, this is something that is better explained and understood with numbers.
Cheers
PS: Thanks to the recursive summation equation, every link on the web affects all the pages on the web indirectly; but we are implying here that outbound links affect the PageRank of the linking page directly ;-) That simply is not the case
Because every page initially starts out with a minimum PageRank value, what matters is the final PageRank value after PageRank iterations are done, and the final PageRank values are affected by outbound links.
You cannot think of a page as having TBPR 5, then add outbound links to it and claim those links don't leak PageRank. PageRank calculations aren't done after PageRank values have already been arrived at; the links are part of the iteration itself.
That is correct.
Halfdeck, all the public PageRank formulas I've seen define PageRank in terms of the inbound links. I am not sure how a page's outbound links can have any direct effect if they are not included in the equation. Perhaps you can point me to some recent research paper that I am missing.
"I've seen define PageRank in terms of the inbound links."
Right. Like you said, it's difficult to explain, and I know what you're saying. To paraphrase your point: PageRank is calculated in part by using the sum of the PageRanks flowing into a URL. Adding 1,000 links to that URL only influences how much PageRank passes through each of those links. Outbounds don't directly impact the PageRank of the linking page itself.
Basically, if page A links to page B, page A isn't giving up its PageRank by linking to page B, because page A's PageRank is defined by inbounds, not forward links.
That point becomes moot the moment recursion/iteration enters the picture.
Initially, the sum of the PageRanks pointing at URL X is unknown, because those PageRanks haven't been fully calculated yet. So how are those values calculated? By iterating through. If you iterate, then links on a page (outbound/internal) must enter the equation.
You call that an "indirect" effect, but like I said, page A's PageRank is defined by its backlinks and those backlinks' PageRanks are defined in part by page A's forward links.
Halfdeck - We are on the same page, but the recursion affects all the pages in the index not just the linking page. This will be clearer in the example I am producing for the post.
That is the reason why I was careful to say that the outlinking has an indirect effect. Depending on how the links are structured, this effect might be minimal.
The problem here is that we are implying that the effect is direct. I wouldn't be surprised if some readers start no-following their outbound links to retain their PageRank. Again, there is no direct effect because the PageRank score of a page is defined in terms of its inbound links.
"the PageRank score of a page is defined in terms of its inbound links."
And the PageRanks of those inbound links may be defined in part by a page's forward links.
Yes, I think a visual representation would be easier to digest :)
I personally think the term "PageRank leak/bleed" is inaccurate. But I think it is true that by linking to other sites, from purely a technical standpoint, your domain as a whole loses PageRank, because you are choosing to flow PageRank to external pages instead of to internal pages.
There is actually an annoying bug in the form where it sometimes doesn't capture the recalculated values. This link should hopefully show what happens in the 3rd instance, where you try to compensate for the leaks.
There is a fundamental concept in leaks.
All pages are web documents, and for PageRank it doesn't matter about domains as far as published material allows us to determine.
If you believe that you can control the flow of juice around a site to place emphasis to a particular landing page, then it is equally possible to do this for a document hosted on another domain in a sacrificial manner.
By doing so, after multiple iterations of the formula, outbound links have a detrimental effect on the domain as a whole.
One of the reasons that SEOmoz has seen very little benefit from their own changes in internal linking is that by adding nofollows to lots of internal links, and even the removal of some links, more emphasis is placed on external links on each page.
The same can be said about internal or external links. Public PageRank formulas make no distinction about internal or external links.
Andy, I feel you are diverting from the original critique to the post by mentioning examples that might indirectly affect the PageRank of the linking page.
If the outbound links link back to the linking page directly or indirectly they will obviously affect the PageRank of the page; but I will provide example link structures where they don't affect the linking page and hence don't affect the PageRank. You cannot make generalizations out of specific examples.
Andy, I love your take on this.
Give and get is really heap!
Hamlet - just want to clarify.
You're suggesting that once a page has accumulated a certain amount of link juice, that juice cannot be reduced by the quantity of links on that page. For example, if page "A" has a link juice quantity of 10 with no links on the page, it would still have a link juice quantity of 10 if it added 2 links or 52 links?
Just one more point - even if this is the case (or if different search engines use link juice differently and some follow this pattern while others use a "leaky" model), the logic for using "nofollow" and controlling link juice patterns still holds. So long as the amount of juice passed is dependent on the number of links (which I think we agree on), sculpting the flow of juice by limiting the number of links to the places that need it most is still a solid strategy.
That is correct. I will write a detailed post with the Math exercise to prove it.
We are on the same page here. That is a strategy that makes perfect sense, but it is not necessarily practical in every scenario. For example if the page has low PageRank values, there is not much it can give to other pages on the site.
Hamlet - very interested to see the math you use, and I'd love to hear your opinion on whether modified/updated versions of the original PageRank formula gives any different results :)
Already started working on it. ;-)
As you know, the exact formulas Google is using are not public, but the papers that discuss improvements to the original PageRank model focus primarily on reducing the convergence time (so that they can compute the PageRank much faster), personalization, sensitivity issues, etc.
Rand, I just want to hit on this a little. Getting inlinks and controlling the juice with rel="nofollow" or a robots.txt disallow is all good and dandy if a site has existing authority; if not, it would be a waste of time, and if one is to SpeedLink and rocket out, they will land in quicksand...
I have to agree with Hamlet.
From my understanding of the math, PR cannot be "leaked". The amount of PR a page is able to distribute is directly related to the amount of PR it has, but the amount of PR a page retains has no relation to the amount it distributes.
As Rand pointed out below, that has no effect on using nofollow to manipulate PR flow. It just means a page has a pie of PR, and you can make the pieces you give out smaller or bigger by using nofollow.
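Jeremy's pie analogy, sketched in code under the sculpting assumption this thread uses (nofollowed links drop out of the split entirely, so followed links get bigger slices). The damping factor and the equal-split rule follow the original PageRank paper; the URLs and PR values are made up for illustration.

```python
DAMPING = 0.85  # damping factor from the original PageRank paper

def juice_slices(page_pr, links):
    """links: list of (url, followed) pairs. Under the sculpting model,
    only followed links share the pie, in equal slices."""
    followed = [url for url, is_followed in links if is_followed]
    if not followed:
        return {}
    slice_value = DAMPING * page_pr / len(followed)
    return {url: slice_value for url in followed}

# A made-up page worth 10 units with three links, one nofollowed:
slices = juice_slices(10, [("/products", True),
                           ("/articles", True),
                           ("/privacy", False)])
```

With all three links followed, each carries about 2.83 units; nofollow the privacy link and the remaining two carry 4.25 each. The pie stays the same size, it is just cut into fewer slices.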
Jeremy, I went through this discussion about 6 months ago and many SEOs would swear that it does not leak, but after I deindexed my supplemental pages my site stabilized instead of riding up and down on a Google elevator.
I think I even read Adam Lasnik hint that there is Google PR leak, in some interview.
Jeremy - I like your pie analogy.
Instead of blocking PageRank to a page completely, you can also reduce the amount of link juice flowing into a page by adding nofollow to "some" links pointing to a URL.
If you have any sitewide links, no matter how much you nofollow links to "unimportant" pages, PageRank will eventually gravitate toward those pages. In that scenario, many third-tier pages may still not have enough PageRank to make it out of the supplemental index.
Remember, using robots.txt and META robots to block "unimportant" pages will not work because robots.txt blocked pages still accumulate PageRank. The only reliable way to control PageRank internally is by using nofollow.
If you do start adding nofollow to internal pages, you need to make sure you have enough followed internal links on every page. Otherwise, the PageRank you think you gained will just escape through outbound links (because now outbound/internal link ratio is greater).
If you don't have a lot of backlinks (e.g. your highest TBPR page is a TBPR 2), this tactic won't give you a lot of mileage, because to sculpt PageRank you need at least a moderate amount of PageRank to play with.
And a shout out to your Supplemental Results Tool, as it has helped me peek under the hood of a sputtering site more than once.
One question that came up using it the other day though, is there any way to export the data into Excel? Oddly I was able to copy and paste the results once, but I haven't been able to duplicate this recently.
Chuckallied, thanks for the shout out :)
"is there any way to export the data into Excel?"
Nope. At the time I coded that, I wasn't in the habit of using CSVs. It takes maybe 10 minutes for me to add that functionality, but unfortunately, these days I'm literally swamped with work and catching up on Dancing with the Stars on abc.com. I keep promising people I'll do some stuff over the weekend but by the time Friday hits I just want to sit back and relax..
If I do update that tool, I'll post something on my blog.
Haha. I know how that is. What's weird is that somehow I was able to copy and paste it once into an Excel document. It gave me the URL, HTTP header status, numbers for PR, and all sorts of other assorted things. I'll try to see if I can reliably reproduce my hack, as that would save coding it in. Maybe this is something the community can take up as well.
Ah ha! Figured it out. The time it worked when I tried it, it was with a smaller site, on larger sites the Windows Clipboard can't handle copying and pasting all the data. So here's what you do:
Select a partial number of lines. (1000? Dunno the limits of clipboard memory)
Press Ctrl+C
Paste into notepad with Ctrl+V
Select from Notepad, copy, and paste into Excel.
Repeat.
Like I said, with a small site you can do this all in one copy/paste step, but with a larger site you need to space it out. There's probably an easier way to do this (some clipboard program or bumping up the default memory for Ctrl C), but the good news is it works, and you can save the results into a spreadsheet!
Halfdeck, I am sorry dude, but I really have to disagree with your statement that you cannot stop leakage of PR by disallowing a page in robots.txt or with a meta noindex tag.
I had a link resource page with GTPR 3; now it is GTPR 0.
If a page has GTPR 0 now, how can PR be leaking to it? If PR were leaking from the top to that page, then that page would not go from PR 3 to 0.
Unless you can explain the reasoning behind having to use rel="nofollow" to block PR leaking to your internal pages, I see no need for using it in situations where you can block the pages with a disallow in robots.txt.
rel="nofollow" = do not index. meta noindex = do not index. disallow = do not index.
In a case where you have many pages that you want to disallow, like affiliate content or a WordPress blog's real URLs (not the virtual rewrites):
With the WordPress query string p=post_number, how can you rel="nofollow" it and still let the virtual URL this-is-my-post.htm be followed?
You need to use robots.txt or meta noindex in your PHP script.
So unless you can reasonably prove why you will leak PR even if you use noindex or disallow, you are Dead Wrong!
"I like to think of Websites as a glass of linkjuice"
Ahhh - but is PR 5 half empty or half full?!
But seriously, I think where you say this (in the text):
"raw link juice is one of several hundred factors in the algorithms at the major engines, don't be surprised if this tactic has little impact on competitive rankings"
I think you hit the nail on the head - always bear your other ranking factors in mind when looking at controlling pagerank and don't sacrifice overall relevancy or 'optimisation' of your site just to conserve a little linkjuice.
Nice comprehensive post Rand and hope everyone at the mozplex has a great thanksgiving!
And of course, you want to keep your glass filled with the all-natural, organic juice... that paid-for stuff can go bad on you faster than milk sitting in a car on a hot Summer day.
Lol - organic only link juice! Classic. None of this GM modified paid for linkjuice rubbish.
Sigh - I remember when you used to have to squeeze the links yourself to get a glass of linkjuice. These days it's all about linkbait and factories and paid this, nofollow that.
Great video Rand, and Happy Thanksgiving to all the mozzers!
One thing to highlight for some of the beginners watching -- 'nofollow' DOESN'T mean that Google isn't going to spider the page that you're linking to, it simply means that no juice is passed. (In fact, Google Evangelist Adam Lasnik I believe once actually commented that a better title for nofollow would actually be 'nolinkjuice.') If you are actually linking to something that you don't want to INDEX, definitely use nofollow on all links to that page, but be sure to put a meta NOINDEX tag on the page itself.
<META NAME="ROBOTS" CONTENT="NOINDEX, NOFOLLOW"> or <META NAME="ROBOTS" CONTENT="NOINDEX, FOLLOW">
... or block with Robots.txt
or if you want to be extra sneaky (and risk being penalized), pop a javascript window up which disables the content if the user isn't logged in, a la expert exchange
David,
good points, although the follow versus not follow seems to have become more gray.
That is certainly how I've always interpreted it and how I think it was intended, but I've read different comments here and there lately that seem to make it more ambiguous.
For instance, item #2 response from Matt Cutts in this post...
and I was sure that I read an even more pointed statement that indicated that they truly aren't following the links, but can't recall where now.
Again, this may simply be interpretation, and it may be a shift, but this is one of those areas where I'd love to see the engines come out and provide their definitive take on the matter, just to make sure we are all on the same page.
Thanks identity, I hadn't heard that the engines had changed their stance on this...it strikes me that Adam's comment came out around the time of SMX Seattle though I can't say for sure.
At any rate, better safe than sorry to NOINDEX the page that you don't want to be indexed. :)
Dave,
Check out the article on nofollow at Wikipedia. It shows a lot of information about it and clarifies a number of points mentioned during this discussion. Cheers!
A shameless plug of my own plugin for WordPress, which helps you control quite a bit of this by allowing you to nofollow paged links etc: robots meta plugin.
Great Whiteboard Friday, thanks for diving deeper into the control of link juice. One tactic I have used to map out a nofollow strategy for clients is to develop a plan in Microsoft Visio. It helped me to get a visual arrangement of a client's pages and draw out the linking relationships between pages.
Happy Thanksgiving.
Using nofollow within our site, we can divert the link juice from https://www.clocktowermedia.com/privacy.cfm (PR 5) to help more important pages like https://www.clocktowermedia.com/portfolio.cfm (PR 5).
thanks....Good tip!
I am not exactly an SEO expert. Just getting started at this stuff, but... If I had links on a page to pages that I thought were boring or useless, I think I would either eliminate the links or modify the pages so that they were not boring or useless. That seems to make a lot more sense than trying to implement some kooky "control the flow of link juice" strategy because that is the new buzz term in SEO. Making your web page more usable by not linking to boring, useless pages would benefit your visitors, and by extension you, a lot more.
I think when Rand says "boring" or "useless" he refers to pages that you need to have on a site for legal issues, but they are not really useful for visitors. Good examples are: terms of service, privacy guidelines and copyrights statements.
Thanks Rand for an interesting post.
I have one question regarding the same.
Do we use the technique while syndicating content?
Because we submit our content to various resources and we all want backlinks from the same. So is it good to use the nofollow tag when syndicating content and only allow the one link that is most important?
Regards,
Ramesh
Thanks Rand for this. I understood the basic concept before, but it always helps to see it visually.
As for the debate surrounding the leaky links, I believe that in order for the whole concept to work, the links must leak juice. How can it be any other way if the rest of the concept is true? There does seem to be a point at which you cross the line if you put too many outbound links on a page. After that, there's no more juice to flow out and all the outbound links suffer. This would indicate that adding more links to a page does in fact leak more juice from that page.
So in other words, I think you are right with your analysis, Rand.
I would agree with Rand that controlling the flow of link juice with nofollow should be done slowly and carefully.
We implemented nofollows on a small site (100 pages or less) 2 months ago and we haven't seen any results. Actually, the site dropped one position on the two main keywords for the two pages we tried to "push" with this technique.
Rand, thanks for providing the diagrams as well. I know you stated that you see the most impact on larger websites. For sites that have just a few hundred pages (under 250), do you find there is any benefit to looking at individual links to determine which would benefit from nofollow?
Rand, you mentioned that you've had what you thought to be a smart nofollow link strategy on a site backfire on you. Do you know why this was? I recently have had the same thing happen to me on a client site.
I am debating on taking the nofollows off to see what happens.
I'm not sure I'd say the nofollow implementation was that smart. Basically, there were nofollows pointing from some important pages to some of the category and sub-category pages, which, while it gave more juice to the other detail pages those pages linked to, it flowed less juice as a whole through the site. Basically, I think nofollow is good when you can use it to get more juice to the long tail, but generally not worthwhile if you're trying to conserve juice for competitive pages in the head. PageRank/Link juice simply isn't a big enough part of the algo to make a big dent for highly competitive terms.
Rand, I think you are right. I put rel="nofollow" on my category links for affiliate pages and I do not see any change in the SERPs.
Usually, a positive change in the algorithm's reading is reflected almost instantaneously in the SERPs.
I read that Google has two crawlers: one crawls twice a month and another may crawl continuously, depending on the popularity of the website or page.
My PHSDL hotlist page is updated daily in Google's cache.
So, as you are saying, the PR dispersion will not be applicable to semi-dormant sites, but will kick in for ultra-live sites, reflecting in the long tail...
I like to equate this to a sociological perspective, in that the Google PR algorithm reflects society.
If you apply some media attention to a relatively unknown person, the effect is marginal at best. But if you shine a spotlight on a celebrity, it is translated into public opinion almost instantaneously.
So the more active and popular a website or page is, the faster and higher the GPRJ is rerouted with rel="nofollow" or any other such attribute.
Rumors spread on wings for Kings.
Rand . . . kick ass White Board Friday post. It touched on one of the lesser known SEO techniques and still didn't get too 'deep'.
I had OneCall.com institute some nofollow links on the site a couple of weeks ago. The goal is to dial in the rather significant problem we had (and still have) with duplicate content on the site (sort orders, all results, etc.) and to keep as much juice as we can flowing to the correct pages. For example, I nofollowed most of the customer service pages because I couldn't care less if Google indexes those pages. Sure, I could have meta noindexed them or even robots.txt'd them, but I felt that was a bit harsh, because the sitelinks we have on Google may be helpful for some users and I didn't want to hinder any of that.
What are the results? Well, the season hit, so it's really tough to tell. I can tell you that our search engine traffic is climbing steadily and faster than any other source (yeah, it is exciting as hell).
Feel free to poke around the OneCall code and let me know if I made some mistakes and why? Be critical . . . I can take it.
Brent David Payne
Thanks Rand,
That is a really nice way of explaining it.
Rand, it's a wonderful post, but I am not able to see the video (maybe because I'm using the Linux operating system). Please see if you can fix it from your end or send me its YouTube URL. Keep rocking!
My goodness.... 100 comments... How can I read 100 comments?
Rand, this is a great post and great discussion here. thank you! I agree with your point that "PageRank/Link juice simply isn't a big enough part of the algo to make a big dent for highly competitive terms."
I just read the 100 comments, Gary! :)
What's the best way to stop a "terms and conditions" page from getting any link juice?
Of course, you can nofollow links to these types of pages.
I know I am awfully late on this, but how would you track the impact of implementing a dofollow/nofollow strategy using GA?
Hi all webmasters,
Can I add the rel="nofollow" tag on my home page? How will Google treat my site - as good or bad? I am in the SEO field.
P. Chauhan
Again a great whiteboard friday including some clarifying visuals, thanks!
Rand, first off, I think this is a great visual description of the 'flow' of links through various pages. I am curious, though, whether it will have much of a noticeable effect when implemented. I'm working on a medium-sized site right now which has a ton of internal links on some of the stronger pages. Many of the links point to various product pages and other pages that don't really need that link juice. I'm eager to play around and test this 'flow of links' theory out to see if any link juice can actually be conserved by adding nofollow to a lot of these internals that don't need to be passing value.
I completely understand the concept of controlling link juice flow with nofollow. I believe it can work and know that many are using nofollow this way now, but I still think of it as being used for spam, as it was introduced to us as a spam-fighting tool.
I just can't get myself to do it. It feels dirty to me to use nofollow on my own site.
lol I know what you mean..
Like most of us, we have a contact-us link on every page. To nofollow this would make sense then?
Does nofollow mean don't trust, or don't pass PR? It certainly used to mean the former.
If it means do not trust, then it should not be used on your own site.
If it means do not leak PR, then I am guilty of manipulating.
So it seems quite obvious to me that a site is taking big chances using it at all...
Nice post - I really like the images. They explain it far better than my previous ramblings on this subject have been able to. I have just forwarded your post round the office - hopefully they'll stop thinking that I'm losing the plot when I speak about developing PageRank funnels within a website.
I thought there was an idea that I heard at SES San Jose this year that using nofollow is like telling Google, "Hi, my site is being SEO'd," since a lot of people don't even know that this tag exists...
I think Aaron Wall mentioned this before. Correct me if I'm wrong, but I believe it had more to do with putting nofollow on external links rather than internal ones, and not linking out to any sites. I think Google's point was to add nofollow to advertisement-type links. At the same time, I can definitely understand the argument that nofollows could be a red flag that "hey, this person is trying to manipulate pages." I'd be interested in anyone else's point of view on this as well, because I'm not sure what the best method is here.
Rand,
Another great WBF!! The video really made sense to me. Happy Turkey day to the whole staff!!
Great post, Rand. I agree this subject deserves a lot more attention. I think you are exactly right about "margins" and internal links.
One of the most interesting topics related to SEO for me is "link juice" and how it all works. So I really enjoyed this WBF. Thanks
Very interesting post. Can't wait to see the follow-up on this.
Thumbs up just for using a spoon! If I turned my camera on, I'd cut my fingers on the way.
I have a link juice problem that I don't know how to approach.
There are multiple small-to-medium-sized content sites. They have a pyramid-shaped linking scheme: a top-level navigation that in turn spreads out and then spreads out some more. The top-level pages are the major content pieces - the pillar articles. But since every page links to them anyway as part of navigation, it's tricky to distribute link juice.
If I add several links from one page to another page, it probably won't make a big difference - it's bound to dilute or not give any extra juice at all.
So how should I approach it? Nofollowing navigation is silly, so something else is in order. Has anybody tried solutions on a similar question?
I would start by reading Dan Thies' SEO Fast Start ebook.
Then move on to Revenge of the Mininet and the bonus online ebook on dynamic linking by Leslie Rohde.
Just Google for them, they are free downloads / access.
Of their linking structures one to look at is possibly a Spider Circle, but be careful of external links.
The more you optimize linking structure, the more you have to be careful of link flow away from your own site, as it isn't hard to create a sacrificial linking structure.
Rand, I have to disagree with a point you made. You indicate that adding additional outbound links on a page increases the total aggregate amount of Google juice passed on to the other pages. This is not how the pagerank algorithm works.
The amount of link juice passed from a page to the pages it links to is set in stone. Adding additional links dilutes the juice among the links (reducing the value of each individual link), but it doesn't change the total amount of juice. The only way to reduce the loss of juice is to have no outbound links of any kind (a pagerank sink), making the loss 0.
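The dilution effect this comment describes can be sketched in a few lines. This is a simplified model of the classic PageRank transition (not the engines' actual implementation): a page with score PR and k followed outlinks passes roughly d × PR / k through each link, so adding links shrinks each link's share while the total passed stays fixed.

```python
# Simplified per-link PageRank share (toy model, not Google's actual
# implementation): a page passes DAMPING * PR / k through each of its
# k followed outlinks, so the total passed on is fixed at DAMPING * PR.
DAMPING = 0.85  # conventional damping factor from the original PageRank paper

def juice_per_link(page_rank: float, num_outlinks: int) -> float:
    """Share of juice each individual outlink receives."""
    if num_outlinks == 0:
        return 0.0  # a "sink" page with no outlinks passes nothing on
    return DAMPING * page_rank / num_outlinks

pr = 1.0
for k in (1, 2, 10):
    per_link = juice_per_link(pr, k)
    total = per_link * k
    print(f"{k:2d} outlinks: {per_link:.3f} per link, {total:.3f} total passed")
```

Running this shows each link's share falling from 0.850 to 0.085 as outlinks grow from 1 to 10, while the total passed stays at 0.850 - exactly the dilution-without-extra-aggregate point made above.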
I have watched the vid twice now and read through it a couple of times and I can't find Rand saying that "adding additional outbound links on a page increases the total aggregate amount of Google juice passed on to the other pages". However, he does make the point you make about the dilution effect.
Secondly, saying the link juice a page has is set in stone does not make sense to me. The juice a page has is a result of multiple factors affecting it which vary over time. One of those factors is links to it i.e. the whole point of the WBF.
Or have I completely misunderstood your comment?
I was referring to the following lines:
The size of the red area is also increased.
Saying it does, or saying it doesn't - either could be incorrect.
Traditional PageRank and similar link juice patents are based upon iterative, cyclic calculations, so everything depends on the linking structure of both internal and external pages.
If, when linking out from a page, the juice doesn't ultimately flow back to the originating page due to external leaks, then the link does cause a drop in the originating page's PageRank.
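The cyclic calculation being described here can be seen in a tiny power-iteration sketch. This is a toy model on a hypothetical two-page site (uniform damping, no dangling-node handling, nothing like a production implementation): when page A's juice cycles back through internal links its score holds up, but when A also links to an external page that never links back, A's own score drops.

```python
# Toy power-iteration PageRank (simplified: uniform damping, no
# dangling-node handling) showing that an external "leak" lowers the
# score of the page that links out. The graph is a hypothetical example.
DAMPING = 0.85

def pagerank(links, iterations=50):
    """links: {page: [pages it links to]}. Returns {page: score}.

    Targets that are not keys in `links` are treated as external pages:
    juice sent to them simply leaks out of the system.
    """
    pages = list(links)
    pr = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        nxt = {p: (1 - DAMPING) / len(pages) for p in pages}
        for src, outs in links.items():
            for dst in outs:
                if dst in nxt:  # juice sent to external pages is lost
                    nxt[dst] += DAMPING * pr[src] / len(outs)
        pr = nxt
    return pr

internal = {"A": ["B"], "B": ["A"]}         # A's juice cycles back to A
leaking  = {"A": ["B", "ext"], "B": ["A"]}  # half of A's juice leaks away
print(pagerank(internal)["A"], pagerank(leaking)["A"])
```

In the closed two-page cycle, A's score settles at 0.5; once A also links to the external page, the juice that leaks never returns and A's score falls to roughly 0.22 - the drop on the originating page described above.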
David - I certainly tried not to make that point! In fact, I was attempting to show quite the opposite - that a fixed amount of juice is given to a page based on what's linking to it.
Now I finally understand the concept behind have nofollow links to other pages on your site. Thank you for the "visual" descriptions.
I just looked at the title of the post and thought I had lost 2 days this week :S
Nice. I thought I had grasped link juice flow before this but no, I wasn't quite getting it. The video was great but for me the written and illustrated explanation nailed it. Big thumbs up for the dual format.
And WTF is tofurkey :) Sounds like it belongs in a banquet devised along these lines.
Now, the pointer. Either that was your crack pipe or your car is not gonna start until you re-insert that. You'd be far better using the STUTDoP.
Finally the tune. It was OK. Kinda funky. I still don't know if you take requests but I'm going to keep making them. And as a lot of this stuff is still a bit of a grey (correct spelling :p) area for me I'm going for "Puzzlin' evidence" by Talking Heads.
Oh and of course I wish our American cousins a happy Thanksgiving.
Tofurkey is tofu turkey. The choice of vegans and vegetarians.
Nice to see you again..:)
As an SEO beginner, this post woke me up! Finally, I understand why having a lot of outgoing links from a page is not a good thing. Also, there is a very useful use for nofollow other than saying to Google: "Please don't drop my PageRank!"
Thanks!
Great WBF/W, and yes, this condensed version could probably be broken out into a complete series.
Happy Thanksgiving wishes to all the moz staff and everyone else. And thanks for taking a break from posting... I'm seriously behind, so having a chance to get caught up would be one more thing to be thankful for!
Great WBF Rand!
The illustrations were especially helpful. This couldn't have come at a better time, seeing as I'm starting to implement this on our site as we speak!
- B
Rand,
Great video. The clarity of thought in your explanation of using the nofollow tag helped me to finally understand the concept. I know you can talk on this topic for a long time, and I think more videos and posts on it would be an SEOmoz favorite. Thanks, and Happy Turkey Day to the SEOmoz community!
Thanks Rand
This is the clearest description/explanation of link juice I've ever seen.
and also...
Happy Thanksgiving!
Rand good explanation and great analysis.
I was able to keep my site from leaking link juice by disallowing affiliate content pages in my robots.txt.
My business Website is old and does not have many new inbound links, so I have to guard my link juice.
Before I did this, all my affiliate pages were in the supplemental index, so I was not getting hits on them. After the tuneup with disallow, my overall site performance increased by 30% and has held steady since then.
If I had higher PR and many new authoritative inbound links coming in on a regular basis, I would not have to disallow the pages, because the link juice would put them in the Google main index.
Also, this is an excellent example of what rel="nofollow" can be used for - to protect a Website's internal PR for your own pages.
I think your recommendation of going slow and experimenting is a very good idea.
As an aside, using rel="nofollow" to guard against PR leakage from your Website's outbound links is not guaranteed, so if you do not trust a site, do not link to it!
Rand, do not eat too much turkey, otherwise you will have to go on a diet when you come back..:)
Have a nice Thanksgiving,
Igor
"I wa able to help my site from leaking link juice by disallowing affiliate content pages in my robots.txt"
robots.txt won't prevent link juice leakage; disallowed URLs still accumulate PageRank.
Okay, how can PR leak, if you tell Google not to index a page with a disallow command?
Doesn't rel="nofollow" mean noindex?
Well, I have heard of the double-condom recommendation, but I do not subscribe to the idea that you need it.
Hey, better safe than sorry..:)
HalfDeck, actually you may have hit an interesting point.
Let's say I disallow a page in robots.txt, and it stops the link juice flowing from the other pages on my Website, where the links sit, to the disallowed page.
Attn: Just put aside the argument if a page leaks juice after disallowed or not!
What is interesting: if instead you use rel="nofollow" on your own Website's links to page "A", other Websites may still link to that page and give you GooGoo juice.
So by this analogy, it's better to rel="nofollow" and not lose your own juice, but still get others' juice!
So, now after all this time, we are learning the real usage for rel="nofollow", not the intended use...:)
Great point, HalfDeck
Extremely helpful...thank you very much!
Definitely going to implement some of this on our real estate marketing blog
I think I speak for a lot of people when I say that I gained a better understanding of the use of the nofollow to contain page rank and pass juice to pages that need it.
Thanks again and Happy Thanksgiving to you too!
Rand, I usually have the attention span of a flea and can't watch all the way through without fast forwarding at some point, this was a great visual illustration, and to be honest not something we use as much as we should. I could really see this being helpful in blogs (which can develop tons of duplicate pages, comment pages, month pages, etc by default) and e-commerce sites too.
As usual, thanks for sharing!
--Wil
Great post Rand. The visuals do wonders.
Hi Rand,
It's a nice post. Would you suggest using nofollow even on smaller sites? By smaller I mean sites with fewer than 100 pages.
I am new to the SEO world, so this might look like a very basic question to you.
Looking at the comments you add to Yahoo Answers, nofollow is added to any links you post there. So why does Google count those as incoming links to your webpage? I have seen in Google Webmaster Tools that it's picking up links from Yahoo Answers.
Could you comment on this? Do links in Yahoo Answers give you some juice or any value? If you get some juice from Yahoo Answers, then what's the use of nofollow?
Hey Sahota, I'm new to SEO too but my understanding of this is as follows:
When you check back links with Google and Yahoo, they return the incoming links to your site (Yahoo returns these with far greater accuracy), without recognizing the nofollow attribute. This is NOT the way the actual algo works.
In other words: the list of back links a link:domain command returns shows everything that LINKS to you, not everything that is passing link JUICE to you.
So the nofollowed links in yahoo answers wouldn't be passing link juice, but Yahoo answers and Wikipedia (which also attaches a nofollow attribute to external links) both get a ton of traffic: this can offer you branding and traffic benefits (assuming your participation reads as being useful/informative/well-written, etc.) that seem to me like pretty powerful reasons to participate at these two places.
Hope that was clear/accurate...
Thanks for your reply. I agree with you regarding Branding and traffic.
But the strange thing is, when you check INCOMING links to your pages in Google Webmaster Tools, it picks up Yahoo Answers links to your website. So if Yahoo Answers uses nofollow, why is Google considering those as links to your website? Does that mean you get some link juice? To me, if nofollow is applied to a link, the Google crawler should not pick it up as an incoming link. So why are these links shown in Google Webmaster Tools?