Important Update: Please note that the results of this test were actually inconclusive. I didn't realize this when I posted it and I didn't figure it out until it was pointed out in the comments. Please disregard the results of this test and see my new post explaining what went wrong and what I am doing about it. I am leaving this post up as it is my personal policy to not delete data.
Update: Based on some excellent feedback in the comments (Seriously, thank you everyone!) I have updated the post with some clarifications and more added data. Specifically, I added a diagram of the page setup and removed a confusing comment I made about Javascript links.
As SEOmoz has matured as a company, our SEO team has shifted away from treating SEO purely as an art and more toward treating it as a science. There is certainly a need for both perspectives, but I believe we are now much more centered.
As a result of this shift, we have been running more tests and analyzing more data. Before I get into the topic of our latest test results, let me provide some important points to establish context.
- There is overwhelming evidence that from a "ROI on time spent working" perspective, there is much more value in link building and creating link-worthy content than in obsessing over search engine algorithm fluctuations like PageRank sculpting. Link building is human-oriented and thus more in line with the long-term goals of the search engines. Links also have the added bonus of being easy to measure and thus easier to prioritize.
- We can’t directly measure how PageRank flows so we can only infer results. This needs to be acknowledged when interpreting test results. That said, we also can’t directly measure objects outside our solar system and this solution of inference has become the basis for modern Astronomy. (If it is good enough for NASA, it is good enough for SEOmoz ;-p)
The Experiment
We chose the following five PageRank sculpting methods to test:
rel="nofollow" - The standard mechanism for nofollowing a link: <a href="https://www.example.com" rel="nofollow">example</a>
Link Consolidation - Consolidating low priority pages. You can read more about link consolidation here.
Iframe - Include a standard link in an iframe that is blocked via robots.txt or meta robots so engines can't follow it.
Javascript - An external Javascript file (blocked from robots) that inserts links into divs when the page renders.
Control Case - Null test with standard links.
Page Setup
We then built five standardized websites that used these different methods (one used iframes for its test links, another used Javascript for its test links, etc.) and included one normal link with anchor text consisting of a phrase that was completely unique on the Internet.
Each website in the experiment used the same template. Each keyword phrase was targeted in the same place on each page, and each page had the same number of images, the same amount of text, and the same number of links.
The standardized website layout contained:
- Three pages per domain (the homepage and the keyword-specific content pages)
- One internal inlink per page (links in content)
- One inlink to homepage from third party site
- Six total outbound links.
- Two "junk" links to popular website articles to mimic natural linking profile (old Digg articles)
- One normal link to keyword test page
- Three modified links (according to given test) to three separate pages optimized for given keyword
- Links to internal pages only came from internal links
- The internal links used the anchor text (random English phrase) that was optimized for the given internal page
- Outbound links (aka "junk" links) used anchor text that was the same as the title tag of the external page being linked to (Old Digg articles)
Example Test Website
Please note that the above example was NOT actually used. I provided a fake example to maintain the integrity of the testing platform for future tests.
The experiment variables were:
- links (based on experiment type)
- colors
- photos (although alt text was standardized)
- text (randomized text based on proper English grammar using a standardized word-set)
We then did everything we could to make sure that all of these pages received the same amount of link juice from external sources.
The null result would be a random assortment of experiment types ranking in the SERPs.
The alternative result would be one experiment type outranking all of the others.
Redundancy
We then duplicated this experiment eight times in parallel. This meant 40 different domains, 40 different IP addresses, 8 different WHOIS records, 8 different hosting providers and 8 different payment methods. (We then went outside and drank)
We ran this test for 2 months.
The Results
PageRank Sculpting Method | Average Rank in Google |
---|---|
Nofollow | 2.4 |
Link Consolidation | 3.0 |
Iframe | 3.1 |
Javascript | 3.2 |
Control Case | 3.2 |
Rank | Test 1 | Test 2 | Test 3 | Test 4 | Test 5 | Test 6 | Test 7 | Test 8 |
---|---|---|---|---|---|---|---|---|
1. | nofollow | nofollow | control | nofollow | consolidation | iframe | nofollow | control |
2. | javascript | iframe | javascript | consolidation | iframe | consolidation | consolidation | iframe |
3. | consolidation | javascript | nofollow | iframe | nofollow | control | control | javascript |
4. | control | control | consolidation | javascript | javascript | javascript | javascript | nofollow |
5. | iframe | consolidation | iframe | control | control | nofollow | iframe | consolidation |
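As a quick sanity check, the "Average Rank in Google" column can be recomputed from the eight per-test orderings above. This is only a sketch over the data as transcribed from the two tables, not an independent measurement:

```python
# Each list gives the methods in rank order (1st through 5th) for one test,
# transcribed from the per-test rankings table in the post.
tests = [
    ["nofollow", "javascript", "consolidation", "control", "iframe"],
    ["nofollow", "iframe", "javascript", "control", "consolidation"],
    ["control", "javascript", "nofollow", "consolidation", "iframe"],
    ["nofollow", "consolidation", "iframe", "javascript", "control"],
    ["consolidation", "iframe", "nofollow", "javascript", "control"],
    ["iframe", "consolidation", "control", "javascript", "nofollow"],
    ["nofollow", "consolidation", "control", "javascript", "iframe"],
    ["control", "iframe", "javascript", "nofollow", "consolidation"],
]

# Collect each method's observed rank positions across the eight tests.
ranks = {}
for ordering in tests:
    for position, method in enumerate(ordering, start=1):
        ranks.setdefault(method, []).append(position)

# Mean rank per method; these round to the 2.4 / 3.0 / 3.1 / 3.2 / 3.2
# figures reported in the averages table.
averages = {m: sum(r) / len(r) for m, r in ranks.items()}
for method, avg in sorted(averages.items(), key=lambda kv: kv[1]):
    print(f"{method:14s} {avg:.3f}")
```

The unrounded means are 2.375 (nofollow), 3.000 (consolidation), 3.125 (iframe), 3.250 (javascript), and 3.250 (control).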
Please see my update at the top of this post. These test results are actually inconclusive.
As you can see, the nofollow method ranked an average of 1 place higher (0.7) in the SERPs than the control result. This is significant when you realize the total is out of 5.
It appears that the iframe method and link consolidation were slightly effective, but the margin was so small that it could be attributed to error.
The Javascript method did not work at all.
The Bottom Line
Despite what the search engine representatives say, nofollow is still an effective way for sculpting PageRank. If you have nofollow sculpting already installed, don’t remove it. If you don’t have it installed, implementing it probably won’t make a drastic change but we encourage you to test this when it is responsible to do so.
I invite you to share your interpretation of these results in the comments below. As with any experiment, these results are not valid unless they can be reproduced and stand up to the critique of others. What should we do differently in future experiments?
Oh wow... I just took a serious look at the data in the table and realized how bad this post really is. If you take that data and rework it a bit, you can get something like this:

Method | 1st | 2nd | 3rd | 4th | 5th |
---|---|---|---|---|---|
nofollow | 4 | 0 | 2 | 1 | 1 |
consolidation | 1 | 3 | 1 | 1 | 2 |
iframe | 1 | 3 | 1 | 0 | 3 |
javascript | 0 | 2 | 2 | 4 | 0 |
control | 2 | 0 | 2 | 2 | 2 |

This version of the data table shows the number of times (out of the 8 tests) that each link method ranked 1st, 2nd, 3rd, 4th, or 5th. Here are a few statements that can be made, based on this data:

- 4 out of the 5 link methods were observed as being both the best AND the worst.
- None of the linking methods were ranked in any one position more than 50% of the time.
- The alleged winner (nofollow) only came in 1st 4 out of 8 times, and it never came in 2nd. The other 4 results were spread over 3rd, 4th, and 5th places.

Think about this: if all 5 of these linking methods work equally well--even if that means none of them work at all--then the distribution of rankings would be random, resulting in an "average ranking" of 3 (1 + 2 + 3 + 4 + 5 = 15; 15/5 = 3). Theoretically, if this experiment was totally worthless (i.e. it resulted in a random distribution) and we repeated it a gazillion times, our data would be this:

nofollow | 3.000 |
consolidation | 3.000 |
iframe | 3.000 |
javascript | 3.000 |
control | 3.000 |

The data from the actual experiment is this:

nofollow | 2.375 |
consolidation | 3.000 |
iframe | 3.125 |
javascript | 3.250 |
control | 3.250 |

Looks pretty damn similar to me.

I should also point out that you've completely misunderstood the scientific method. A valid experiment is one which can be duplicated...including the outcome! You ran the experiment 8 times and never saw the same outcome. You could NOT duplicate your results. At all. Not even close. Instead of scrapping this entire project, you averaged out your radically-different results and drew conclusions from them!
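The tally in the comment above can be reproduced mechanically from the per-test rankings. A small sketch, using the same transcription of the post's data:

```python
from collections import Counter

# Positions (1st..5th) observed for each method across the eight tests,
# transcribed from the post's per-test rankings table.
positions = {
    "nofollow":      [1, 1, 3, 1, 3, 5, 1, 4],
    "consolidation": [3, 5, 4, 2, 1, 2, 2, 5],
    "iframe":        [5, 2, 5, 3, 2, 1, 5, 2],
    "javascript":    [2, 3, 2, 4, 4, 4, 4, 3],
    "control":       [4, 4, 1, 5, 5, 3, 3, 1],
}

for method, obs in positions.items():
    tally = Counter(obs)
    counts = [tally.get(place, 0) for place in range(1, 6)]
    print(f"{method:14s} 1st-5th counts: {counts}  mean: {sum(obs) / 8:.3f}")

# Under a purely random shuffle, every method's expected mean rank is
# (1 + 2 + 3 + 4 + 5) / 5 = 3.0, which is the comment's point of comparison.
```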
It wasn't until I actually looked at the data and your calculations that I realized...you have literally PROVED that this experiment and its findings are completely invalid. I can't believe I was so preoccupied with battling stupid comments that I completely overlooked this.

While I'm ranting, I might as well throw this out there... Rand, earlier you wrote:

"On the specifics - Danny project managed, designed and recorded results for us on this one, so I'll let him speak to the specific points. I do agree it would be great to see the avg. vs. mean and maybe even all the ranking numbers in a spreadsheet doc - I think we can do that in the next 24 hrs. BTW - Wanted to add how proud we all are of Danny on this. This project was a TON of work and he put a lot of effort into ensuring we came up with quality data that was rigorous in methodology. Obviously, SEO testing is a very hard thing to do, and the results of this test may not be perfectly applicable to every scenario, but it's fascinating, worthwhile and remarkable. Keep up the good work!"

Honestly, I have nothing against Danny. I do believe he's a bright kid. But this post is just one more instance of him publishing bad information on the SEOmoz main blog, with you on the sidelines cheering him on. If I want to read amateur posts about SEO, I'll head over to YOUmoz. But this was written by one of your employees and posted on the main blog, under your direction.

There's absolutely no excuse why you let this get published. I know you're smart enough to realize that this entire thing is garbage and that Danny clearly bit off more than he can chew. PageRank sculpting is an advanced topic, and it's one I happen to be very passionate about, yet you've casually tossed it into the hands of a novice and then kicked him under the bus.
Danny's original remark about the JavaScript link results was a dead giveaway that he's not ready to tackle a project like this...regardless of how enthusiastically you shake your pompoms at him.

I offered you this post at a reasonable price and you declined, opting instead for the generic equivalent. As far as I'm concerned, you made a conscious decision to provide your readers with second-rate information. I hope your personal agenda was worth misinforming 55 thousand people.

(on edit from Rand - didn't change much, just removed some potentially offensive personal attack content - this is a good comment and we like to hear this stuff)
Great comment there, thanks for putting everything into perspective. Up until now I thought everyone was complaining for the sake of it whilst I just watched from the sidelines.
I retract my previous 'pinch of salt' comments and completely disregard this study - not because I don't trust Danny or have any judgements of the guy, but because of your comment and analysis here.
Thank you for clearing everything up.
--
Just a small question. Is 'PR Sculpting' something that is used by many major SEO companies or is it something that is performed by just SEOMoz? The reason I ask is because I'd like to know the worth of it during my SEO process in the future.
NB: I'm not debunking anything that SEOMoz or their staff are suggesting about PR Sculpting. It's just nice to know the facts behind it. If someone could point me to an interesting article, that would be a complete bonus.
Nail in the coffin:
Even assuming that there are only a few variables, that's what, a margin of error of 20% or so?
That means that if you add 20% to 2.3 and subtract 20% of 3.2, you've essentially flipped the results, right?
If you want to really bring on the science part, then show us what you believe the margin of error is and why backed up by the math.
FYI: You fight for your ideas on statistics, expect to get hit back with statistics.
<snark>And he actually used the loaded term 'significant'</snark>
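One hedged way to put numbers on that margin-of-error challenge is to compare rough confidence intervals for the "winner" and the control, using the eight observed ranks per method from the post's table. Ranks are not normally distributed and n = 8 is tiny, so treat this strictly as a back-of-the-envelope sketch, not a proper significance test:

```python
import statistics as st

# Eight observed ranks per method, from the post's per-test table.
nofollow = [1, 1, 3, 1, 3, 5, 1, 4]
control = [4, 4, 1, 5, 5, 3, 3, 1]

def ci(sample, z=1.96):
    """Rough 95% confidence interval for the sample mean."""
    mean = st.mean(sample)
    half = z * st.stdev(sample) / len(sample) ** 0.5
    return mean - half, mean + half

nf_lo, nf_hi = ci(nofollow)   # roughly (1.27, 3.48)
ct_lo, ct_hi = ci(control)    # roughly (2.15, 4.35)

# The intervals overlap heavily, so the 0.875-place gap between the mean
# ranks (2.375 vs 3.250) is not, by itself, distinguishable from noise.
overlap = nf_hi > ct_lo and ct_hi > nf_lo
print(overlap)
```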
Michael Martinez take note. This is how you challenge an article if you disagree with its findings.
That's too right. Even though what Mr Martinez was saying and arguing about was true, being argumentative and aggressive only made people oppose him even more.
I suffer from a similar problem sometimes, but it's down to the way I write emails. Apparently I'm very dry and formal, whereas in person I'm very friendly - it's a strange habit.
My advice to Mr Martinez is to get out of that habit, because it makes people oppose him which isn't fair at all!!! Especially if what he's saying is valid.
To quote my favorite transvestite marathon-running British comedian, "And, as I say, 70% of how you look, 20% of how you sound, only 10% is what you say."
That's a great quote and too true!
Just wondering who this comedian is. The only one I can think of is Eddie Izzard.
And you would be right, my good sir!
In that case, Michael Martinez seriously needs to update his profile picture then. That smug looking 'holding the chin' photo is so lame. Did they tell him to pose like that at the kmart photo studio?
The eyes! Big eyes! Then keep confirming and denying things, that's the trick!
Five GOOOOOOOOOOLD rings!
Hey Darren - I appreciate the effort you've gone through here (though some of the tone isn't entirely appropriate - I edited a few phrases/adjectives re: special olympics, etc). I clearly should have been more on top of this and we should have done a post-mortem with some engineering folks internally before publishing.
Looking at what you wrote, I'm not sure the "avg rank" is the best way to be judging this (and that obviously takes a stab at how it was initially presented too). If I ran 8 tests and 4 suggested that the best way to get the results I was seeking was to use nofollow and 3 of the other 4 suggested I could at least get positive results in the direction I wanted using it, then I'd be inclined to believe it works. However, I don't want to state anything definitively until after we meet internally and go through the process, results, etc. At that point, we'll likely do a second post on the topic.
BTW - when thinking about this more, I also realized that Michael may have some points about the nomenclature used. It's very possible that in fact, when trying all of these exercises, we're observing non-PageRank-related link metrics bolstering rankings from the nofollow/iframe/consolidation/etc. I'm inclined to believe Google when they say nofollow leaks PR, and there could be dozens of variables that pass through links that are impacting rankings, so maybe calling it "pagerank sculpting" is what's arousing Michael's ire. In any case - worth more thought and more careful writing in the future.
"Looking at what you wrote, I'm not sure the "avg rank" is the best way to be judging this (and that obviously takes a stab at how it was initially presented too)."

Of course average rank isn't the best metric! It's an absurd metric! Which is precisely why I immediately reorganized Danny's data into something more meaningful and wrote things like: "Instead of scrapping this entire project, you averaged out your radically-different results and drew conclusions from them!"

"If I ran 8 tests and 4 suggested that the best way to get the results I was seeking was to use nofollow and 3 of the other 4 suggested I could at least get positive results in the direction I wanted using it, then I'd be inclined to believe it works."

WHAT!!!? I'm almost at a loss for words. Almost. How could you POSSIBLY interpret 4th place as "positive results in the direction I wanted"? This isn't the New York City Marathon we're talking about here...this is a competition between FIVE linking methods. 4th = second to last.

"BTW - when thinking about this more, I also realized that Michael may have some points about the nomenclature used... ...maybe calling it "pagerank sculpting" is what's arousing Michael's ire."

I scanned this post for any comments where Michael may have expressed a valid point, and I couldn't find any. Regardless, Michael is not an expert on the topic and doesn't cite his sources or back up anything he says. Therefore, I think it's in everyone's best interest if we carry on with our SEO endeavors without regard for his proprietary jargon.
I think the big take-away from this is to not just run out and change your site every time Google says something. It's been 6 mos since Google made that announcement, perhaps a year or more since they claim it stopped working. It appears from this it still does work. So? Do your own testing. See what works best for you.
Edited for spelling
I usually go by the motto "only do what you would if Google wasn't around".
If you're making a change to a website just because of Google (other then just poor usability or accessibility), it might not be the best change. If you can back up the change with other solid reasons, then go ahead.
My priority list is always:
- visitors
- search engines
and never the other way around.
Completely agree with yourself and Saffyre here.
Google wants us to think that they're the big-wigs making the final decisions when it comes to SEO.
What they'll try and do is give us some sort of misinformation or part of the truth in the hope that we'll change the way we code our sites, gain links and so on and so forth.
What they don't know is that we have our own brains and try/test anything that they tell us. So big respect to these guys for running these tests for two months!
I also second what IgniteMedia says regarding priorities. I know there's a post by Rand about content not being directly related to good rankings, but content, usability and accessibility should always come first.
I have to challenge your title - "PageRank Sculpting with Nofollow Still Works". Your test does not show that "pagerank sculpting" works, nor does it support the idea that "pagerank sculpting" ever worked. Since seomoz was the main promoter of "pagerank sculpting" before it was debunked, there is already a bias in research published here (since it can be argued that seomoz has a need to defend the prior claims after they were debunked by Google).
Don't get my intent wrong -- I love to see testing... but the testing should be valid, or it risks extending the disinformation instead of correcting or updating it. With any test, there is variance in the measures. You measured rank to prove your theory (and support your aggressive title). But there is variance to rank. It can change day to day, and it can change due to various factors not controlled by your test. It is even likely to change across your test sites, since it is impossible for your link building to be exactly "equal" across sites with different keyword content, different hosts, etc.
Again, I am not saying your test is junk. I am simply saying that there is variance in your measures that has not been accounted for and is likely to be at least as large as your measured "results". Note that such variance is unavoidable; we all have to live with it. But when doing tests, we have to take it into account, so that our measures are reliable and meaningful.
Variance is often used in a derived form known as "standard deviation", when normal populations are studied. When voters are polled, for example, the pollsters report a +/- margin of error, which is based on the variance in the measure they use to calculate the voting. They often consider how the people they poll differ from the general population, for example. When they say that Francis is leading Terry 48% to 51% with a margin of error of 4%, we all know the race is too close to call. There is no statistical difference between the counts for the two candidates (even though the exact counts differ by 3%), due to the margin of error (caused by variance in measures).
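The polling margin of error mentioned above comes from a standard formula for a sample proportion. A minimal sketch (the sample size of roughly 600 is my illustration, not a figure from the comment):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for a proportion p measured over n respondents,
    at 95% confidence (z ~= 1.96): z * sqrt(p * (1 - p) / n)."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll of about 600 people at p = 0.5 (the worst case) yields roughly
# the +/- 4% margin in the Francis-vs-Terry example.
print(round(margin_of_error(0.5, 600) * 100, 1))  # ≈ 4.0
```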
Your test reported "the nofollow method ranked an average of 1 place higher (0.7) in the SERPs than the control result", but did not report any variance measures. Given my experience with SEO over the past decade, I am certain the variance is too high for that difference to be significant. Your test could actually show that the nofollow case performed worse than the control, using the very same data you reported. We have no way to know. Your title "PageRank Sculpting with Nofollow Still Works" is in my opinion, irresponsible. There is no doubt that it is not supported by your reported data.
You also stated "This is significant when you realize the total is out of 5" which I don't understand since we have no way to know if a 0.7 difference is significant or not on a scale of 5, lacking any knowledge of variance. I'll grant that it sounds good, but we need more than that when we claim to do testing. Often a "control" is added to an experiment in an attempt to balance out some of the out-of-control changes, assuming that either they can be accounted for or they are uniform enough across all test conditions (statistically) to cancel out. You have a "control" in this experiment. Regarding that, I have one question -- how does one control for variations in rank across 40 sites on 8 hosting providers, using different unique keywords, over 2 months time, on Google? Even if I think really really hard, I can't imagine how your control could be a valid control for this data as reported. If you could answer that question.. or if seomoz could publish an accurate answer, I'd be very, very impressed with the caliber of SEO information published here.
It is also remarkable that your javascript test showed the exact same value as your control, given the fact that there must have been a good deal of natural variance in the SERPs when you worked across 40 sites on 8 hosting providers, with different keywords, over 2 months time. You are either a very lucky guy to have seen that coincidence happen during your test, or there is a bigger problem in your testing method. In fact, in my experimental laboratory work, such a result is so unlikely to happen naturally that I would consider it a major clue that something is very wrong with my experiment. My sensitivity... my ability to accurately measure the very changes I am seeking to document, is suspect. Again variance is involved, along with error. Perhaps it is a matter of precision... I can't tell. There isn't enough evidence. But I can bet with very good odds that this experiment as reported does not support the claims let alone the inflammatory title.
I offer these comments in response to your invitation "I invite you to share your interpretation of these results in the comments below". I felt it important to make these notes because you stated right up front "our SEO team has shifted away from treating SEO purely as an art and more toward treating it as a science". You also told your readers "Despite what the search engine representatives say, nofollow is still an effective way for sculpting PageRank".
Therefore this blog post claims scientific authority, directs readers to act in opposition to advice offered by the search engines themselves, and does so irresponsibly (lacking adequate basis for support and using language constructs to influence acceptance of findings as valid).
I hate to think that seomoz has become just another seo disinformation web site.
Could you format that into paragraphs so I can read it without my eyes going crazy?!?
It had paragraphs when I posted it.... not sure what happened there.
Lol, agreed!!! Big time usability FAIL on that comment! :) #commentfail
It would be nice to have the power to remove my comment in a case like this. The formatting issue on John's comment was fixed now so my comment and chenry's comments don't make sense anymore... =/
Perhaps you shouldn't have bagged on it in the first place rather than being that guy and then asking for the ability to take it back. Regardless of formatting, the comment was well thought out and awesome.
Hi John,
Thank you for your input and refresher in basic statistics. I appreciate the critical eye.
As mentioned in the introduction to this post, the best we can do with search engine testing is infer. I made that perfectly clear at the beginning of the post and thank you for bringing it to everyone's attention again. It IS an important point and deserves to be highlighted.
If you have any specific changes you would like me to include in future tests, please let me know.
Hey John - great comment and much appreciated. I actually edited the title to reflect what I think is a very valid critique. The title wasn't commensurate with the post in reflecting the message we were trying to send.
On the specifics - Danny project managed, designed and recorded results for us on this one, so I'll let him speak to the specific points. I do agree it would be great to see the avg. vs. mean and maybe even all the ranking numbers in a spreadsheet doc - I think we can do that in the next 24 hrs. :-)
BTW - Wanted to add how proud we all are of Danny on this. This project was a TON of work and he put a lot of effort into ensuring we came up with quality data that was rigorous in methodology. Obviously, SEO testing is a very hard thing to do, and the results of this test may not be perfectly applicable to every scenario, but it's fascinating, worthwhile and remarkable. Keep up the good work!
John,
You raise some excellent points. There is not enough information present in this post to draw conclusions. However, I have had the opportunity to converse with Danny, and am confident in his statistical savvy.
In order for us to really put this matter to rest, SEOmoz would have to publish the full results of this study in a research paper format (I suggest APA). It would then be available for scrutiny by peer review. The paper must be comprehensive enough that other researchers can replicate this study and confirm its findings.
Research methods don't end with the study. Excellence in reporting results is just as important as creating and conducting experiments.
I read it all :o)
With fear of getting seriously battered...
Given the fact, that there are about a million factors you can't control, what about trying to improve the experiment, when the data becomes available here at seomoz ;o)
Since you're waaaaay beyond my league, it's more a question than a suggestion:
Would the same test make any sense, or would it just fuck it up even more (sorry, I'm pretty forward, lacking the 'be nice when pointing out' gene), if you added SERPs from various geographic locations in the world (England, Ireland, Australia, New Zealand)?
They're still English-speaking countries....
I found this: https://www.redflymarketing.com/blog/google-global-view-results-different-locations/
John,
I really appreciated your analysis and wish there were more of you in the world to challenge us non-scientific SEO Experts. If you received any negative feedback or sarcasm, please don't pay it any mind. I myself can't participate in many open forums without getting verbally chastised (as you'll find if you seek me out in Resource-Zone or SEOChat, two destinations I have no plans of revisiting).
Search, like many industries, has its own culture. You were absolutely accurate in your observation of how the SEOMoz Team used the word "scientific". However, in our non-scientific world, anything outside of best practices that requires logic and math is considered science. The use of the word was not made to offend scientists, but to let the rest of us know that this stuff could get complicated. Call it a metaphor if you wish, but definitely don't read into the context.
We know we can't pinpoint variables with precision, but it sure is fun testing (not always fun looking at Rand's data quadrants when the test results are reviewed though).
The reason I’ve seen a movement from choosing resources in India to using resources in Manila has to do with the amount of creative thought as opposed to logic and math. I personally have tripled rankings on corporate websites by simply invoking “title tag principles” (Google that), which require a call-to-action, keywords and a value proposition. Tinkering with PageRank sculpting is definitely not something I’m interested in. Why have everyone detailing the inside of a house when they could be outside in bikinis holding up bright neon signs that say “Eat here, first meal free!” Content (really good content) is always the best strategy. And I’m not trying to suck up to Lisa Barone when I say that.
Puffery and exaggeration are also part of who we are as SEOs; it’s just marketing. Without the title they used in this article, I imagine the traffic and interaction would have been far less than what it is. Just take a look at the number of people who have responded in ONE day.
Challenging the word of the search engines is a bit like an adolescent challenging his parents, and most of the time the parents are right; but who would we be if we didn't challenge them? Would Matt Cutts's participation with Webmasters even be needed? I've personally heard Matt thank the people who challenge the word of Google, and I have seen multiple videos where he actually responds to questions with honest answers, sincerity and respect.
Please continue to respond and challenge opinion and testing practices. Please do remind everyone that this isn't REAL science and that there never really will be a way to be mathematically accurate. However, keep in mind that we aren't scientists, just light-hearted, passionate SEO folk trying to understand how to affordably increase traffic through inbound marketing. Lighten up a bit, and if this culture doesn’t match your taste in conversation, try hanging out with the Infonortics guys who are definitely more scientific than those of us who are a tad bit more right-brained. Ref: https://www.infonortics.com/searchengines/
Best,
Steve
I only spoke up because the article, as written, clearly claimed serious/sincere new data showed that "PageRank sculpting" works. You can have all the puffery and hyperbole you like... I've been around SEO long enough to know how things go... but if you state it flat-out wrong, you should be challenged.
First of all, nice work on the experiment Danny. :-)
However, I do agree with what John says. Experimenting itself is a science. It takes a lot of experience to get to know how to carry out an experiment properly. But ALL experiments do add to the knowledge base of any subject (even the ones lacking rigour). Even coming up with the right experiment to test a phenomenon isn't particularly easy sometimes. So kudos :P
That being said, I do think it would be nice to see a report on the experiment (On this one and others carried out here). That will not only help justify claims but also help others carry out further analysis/experiments on the subject.
A good book on experimentation for beginners is "Experimental Methods: An Introduction to the Analysis and Presentation of Data" by Les Kirkup. Pick it up if you ever get a chance. Costs GBP14 from amazon :P
And most importantly keep on experimenting...
I'm surprised by your findings with Javascript. You say:
whereas I would imagine Javascript to be the most effective method for pagerank sculpting precisely because the engines can't see the links.
The idea with PR sculpting is that if you have a page with 10 links, and nofollow 9 of them, then all the juice would flow only to that one page. Of course, Google came out 6 months ago and said this isn't the case, and instead, that one page would still only get 1/10 of the juice and the rest would just disappear into nothingness.
So, imagine you use the Javascript method you describe on the 9 links instead of nofollow. If the engines can't see those 9 links at all, then all the juice would flow through that one link. Wouldn't this be perfect page rank sculpting?
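The juice-flow arithmetic above can be made explicit. This is a hypothetical sketch of the three models being discussed (illustrative numbers, not Google's actual algorithm):

```python
# A page with 1.0 unit of PageRank to pass and 10 outbound links,
# 9 of which are "removed" one way or another. Purely illustrative
# models of the public statements, not Google's real implementation.
pr_to_pass = 1.0
total_links = 10
kept_links = 1

# Old nofollow behavior: nofollowed links drop out of the denominator,
# so the one followed link receives everything.
old_nofollow = pr_to_pass / kept_links        # 1.0

# Post-2009 nofollow behavior (per Google): nofollowed links still count
# in the denominator; their share simply evaporates.
new_nofollow = pr_to_pass / total_links       # 0.1

# Truly hidden links (IF the engine never sees them at all): the
# denominator is only the visible links, mimicking the old behavior.
hidden_links = pr_to_pass / kept_links        # 1.0

print(old_nofollow, new_nofollow, hidden_links)
```

This is why, under the commenter's assumption that the engines cannot see the Javascript links, hiding should beat nofollow; the test result therefore suggests the links were not actually invisible.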
I'm assuming you set up your test differently than how I'm describing here, but, I thought I should raise the question as I suspect some people might read this and immediately conclude that Javascript is useless for page rank sculpting, when really, I think it's the most effective method still available.
Please correct me if I'm wrong. Thanks.
Huh, that's a thinker...
Your logic sounds right to me, but the test results showed differently. Perhaps Google is treating robots.txt differently than we thought?
Or maybe Google is actually rendering the pages rather than just parsing the code? This would explain them seeing the links after executing the Javascript.
Unfortunately, there isn't enough information to definitively decide either way.
That said, I'll keep this in mind in future tests.
On this topic, you may also want to test using Javascript to make links out of HTML elements other than normal <a> tags (such as with an onclick that redirects, plus CSS that makes a span, li, or whatever look like a link). In my experience you need the Javascript to be moderately complex or Google will find/follow the links, even if they aren't real links. Putting onclick="document.location='page.html';" on a span doesn't work, but if you had onclick="navTo(5);" and 5 maps to page.html in the Javascript, you might be OK. Of course, if you have thousands of page URL variations you may need to have multiple arguments and glue your URL together, but that should be even more obscure to Google.
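To illustrate the indirection being suggested here, a minimal sketch (the `navTo` name and the ID-to-URL map are hypothetical, not anything from the tested sites):

```javascript
// Hypothetical sketch of the indirect click handler described above.
// The ID-to-URL map lives only in the script, so nothing that looks
// like a URL appears in the element's markup itself.
const pageMap = {
  5: '/page.html',
  6: '/other-page.html',
};

// In a browser you would assign the result to document.location from
// an inline handler, e.g. onclick="document.location = navTo(5);".
function navTo(id) {
  return pageMap[id];
}
```

A crawler that merely parses the HTML sees only `onclick="navTo(5);"` and has no literal URL to follow; only a crawler that executes the script (or pattern-matches the map) could recover the link.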
Exactly as Whitespark said: if Google can't find the links, then from a PR sculpting perspective something that removes those links entirely from Google's eyes would seem to be the best option, regardless of how Google handles nofollow. Given that Javascript tied the control group in your test, it would seem that Google was able to find your Javascript links.
Totally agree with you. I thought everyone in the know knew that Google was reading Javascript and Flash now. I'm shocked to see how many people in the biz still don't realize this. Lots more of them are confused about the tech and don't know whether it's still possible to hide links this way.
What about using AJAX to fetch the links from the server? Then they aren't even in the .js files.
Perhaps even use a test of whether the visitor is human before returning them? I realise that's cloaking, but if you're doing backdoor PR sculpting then I guess it's not too different...
Whitespark - I totally agree with you.
Danny - "As expected the Javascript method did not work at all (Since the external scripts were blocked, the engines never saw the links) "
Sorry for the requote, but isn't the 'engines never seeing the links' exactly what we are hoping for with PR sculpting? Why were you expecting the javascript method to fail if you thought that the links would never be seen?
I think the bigger question is: were they using pure on-page Javascript? Packed JS? Minified JS? A Javascript library like jQuery/MooTools, or some other form of obfuscation? I think we could use some more details here before assessing the validity of this test. Something smells off.
Danny, it sounds like you have been as thorough as possible with this experiment, and I commend you on your efforts. In order for me to critique this experiment or make any suggestions for improvement, I would need way more information about the setup. First and foremost: what was the linking structure of the different domains/pages? The only thing you've told us is that different pages use different link types. I have no idea what that means!
- How many pages per domain?
- How many inlinks per page?
- How many outlinks per page?
- What are the outlinks pointing to?
- Where are the inlinks coming from?
- What anchor text do the inlinks use?
- What anchor text do the outlinks use?
- Which page from each domain is ranked?
The easiest way to answer these questions is probably with some kind of illustration (like this, but with anchor text and link type represented too). Rand could probably whip something up in just a few minutes.
Without knowing this information, I'm just as confused as Whitespark regarding this statement: "As expected the Javascript method did not work at all. (Since the external scripts were blocked, the engines never saw the links)" This statement seems to contradict everything I would have guessed about the sites' linking structures. It just doesn't make any sense. Regardless of the linking structures, the iframes and .js links should have had the same results, since they both rely on robots.txt to prevent Google from knowing the links even exist.
There are lots of questions that come to mind, based on what this .js experiment data is supposed to mean. For example:
1. Why would .js work but not iframes?
2. Were your iframe URLs appearing in SERPs?
3. What robots meta tag did your iframe files use?
4. Did you view your server logs to see if Google requested your disallowed files?
5. What function/code was used in the external .js file?
6. How were the link-generating .js functions called from the HTML files?
7. Why didn't I receive my SEOmoz t-shirt after you told me you sent it?
Overall, I'm impressed by how many times you duplicated the experiment (especially where you found 8 different ways to pay for hosting), but at the same time...there are far too many details left out, and ultimately my preconceptions about PR sculpting methods were unchanged by this post.
Hi Darren,
Great questions. I updated the post to address them. I numbered my responses in the post the same way you asked them in your comment.
Cheers!
Hey, wait a minute...where's #7?
It's in the mail
I must say I love the new direction SEOmoz is going in. More studies like this are what the SEO community needs rather than the same old regurgitated opinions most often seen on industry blogs. Kudos to Rand and the rest of the mozzers!
I was sceptical of the nofollow issue when I heard Matt Cutts announce it at SMX Advanced. It's clear that they try to manipulate webmasters in an attempt to stop webmasters from trying to manipulate their results. Just another case of <sarcasm>Google can do no evil</sarcasm>!
It sounds like it took a lot of effort to obtain this data so thanks for all of the hard work! :)
Words can't adequately express how disappointing it is to see such a clearly flawed "test" posted with the hyperbolic title it started with. The new title is hardly better.
How about "Test Shows SEOMoz Doesn't Understand Testing... But They Understand Link Bait Just Fine"
Your post doesn't give sufficient detail about the architecture of the sites you constructed to even begin to critique the methodology, which is likely flawed.
But it doesn't matter, because the variation in rankings is not significant.
I ran a 2-minute experiment (took me longer to log in to post a comment) using Microsoft Excel.
I created 5 rows - one for each "testing methodology." In each row I used Excel's RANDBETWEEN function to pick eight random numbers ("rankings") from 1 to 5, one for each duplicated test.
The average rankings of the five rows, respectively, are:
2.875
2.625
3.0
3.0
3.375
Get back to me in a couple months and I'll run the experiment again, with similar results.
Guys, if you want to test you need LOTS of sites.
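Michael's two-minute Excel exercise is easy to reproduce outside of Excel too. Here's a sketch (JavaScript standing in for RANDBETWEEN) that draws 8 random rankings from 1 to 5 for each of the 5 "methods" and averages them; with samples this small, the averages scatter around 3 by roughly the same margin the post reported:

```javascript
// Simulate Excel's RANDBETWEEN: uniform random integer in [lo, hi].
function randBetween(lo, hi) {
  return lo + Math.floor(Math.random() * (hi - lo + 1));
}

// Average of n random rankings between 1 and 5, mimicking one
// "testing methodology" row with one ranking per duplicated test.
function averageRanking(n) {
  let sum = 0;
  for (let i = 0; i < n; i++) sum += randBetween(1, 5);
  return sum / n;
}

// Five rows of eight samples each, like the 5 methods x 8 test sites.
const averages = Array.from({ length: 5 }, () => averageRanking(8));
console.log(averages); // five values clustered loosely around 3.0
```

The point of the exercise: pure noise produces averages in the 2.6 to 3.4 range, so results in that band are not evidence of a signal.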
What was the original title of the post? I missed it.
That would be... "PageRank Sculpting with Nofollow Still Works"
So the new title just added the "Tests Show" part?
Interesting post and interesting comments - hats off to Danny for all the time and work involved, and to the commenters for furthering the discussion.
The only thing I'd add is a slight objection to the statement (in the comments above, I think) that all of this was done before the implementation of personalized search. It reads as if there's certainty about the exact moment that personalization was turned on across the board.
Really, all we know is when Google announced it. Some of us have been talking privately for a couple months about seeing "personalized" results even when logged out. So even though Google just announced it a few weeks ago, I tend to think it was in play much earlier -- and it should be considered in any test like this.
You can add &pws=0 to a query URL to disable Personalized Web Search. As far as I know, that has always worked and still works.
Thanks for that useful information, SEOmofo. I was searching for ways to disable personalized web search, and now I've learned there's an easy way to do it.
Thanks for sharing.
Nothing says "you're welcome" like even MORE useful information! So here's a bookmarklet that makes the &pws=0 really easy to use: javascript:location.href=location.href%20+%20'&pws=0'; Copy that and paste it like a URL in your bookmarks (preferably on your bookmark toolbar for easy access). The next time you're looking at Google search results and you want to refresh those results with &pws=0 added to the URL...just click the bookmark button.
Thanks a lot, SEOmofo, for responding again with more great and useful information. This really helps me.
You might want to add the following to your firefox search box: https://yoast.com/tools/seo/disable-personalized-search-plugin/ Very, very useful :)
Hi Matt,
You bring up an excellent point that I didn't think of. You are right in thinking I simply assumed that logged out personalized search started when Google announced it. That was too much of an assumption as they would have had to test it first.
Luckily for me I covered my butt. All of the results were documented only after downloading a new browser (in this case Opera) and viewing the results from a new IP (I have an AT&T card that requests a new IP every time I connect).
Stepping out on a limb here but as far as I know, given the new IP and brand new browser, I don't think personalized search would have made a difference.
Again, I think my paranoia just made me lucky. You brought up an excellent point.
So, the SE Reps deliberately spread misinformation. Shocker. Thanks for putting legs on this theory, and adding some scientific rigor to it.
And why wouldn't they spread propaganda? It's war, all-out war!!
Hi everyone - a few quick notes about some of the comments left in this post:
Thanks for your understanding and participation - especially over the holiday week! SEOs never rest :-)
Happy New Year to all!
Good choice to make, Rand. The karma of this post didn't feel very 'SEOMoz', besides. I think Michael has made his point several times and has even posted an article on his site about this - I'm sure any further comments can be made there.
Just a note:
Some people may find what Rand and the team are doing a bit controlling and overpowering, but keep in mind that this is their site, and whilst we do have freedom of speech, there must always be an invigilator there to step in when the time comes.
Well done.
This statement/mentality is to your detriment. Freedom of speech is powerful. And only benefits those who are reading/listening.
Honestly, from my brief time with you guys (<2 years as a pro member of this community), I can say that you tend not to embrace criticism.
I say this for a few reasons:
1. When someone says 'prove it' or 'you're wrong', they tend to get tons of 'thumbs down'.
2. Those who say 'prove it' or 'you're wrong' tend to get their comments edited or deleted, because they are emo. (Wake up, those who have valuable criticisms are emo.......)
3. When someone is 'negative' they get filtered. As if we were all children and needed our eyes blinded to emo/child-ish comments. (I.E. semi-personal rants; or even entirely personal rants)
I doubt Moz will change. And for better or worse, I'll be watching and reading. Lots of people in this industry are very... emo. It's a part of life. They want to be heard, no matter how childish (and/or valuable) their rants are. They have their blogs and Twitter to vent.
As time goes on, the chasm between you and other SEOs will only widen. Maybe because Moz won't change. Maybe because some of them are emo. Maybe because you or they can't "agree to disagree". At the end of the day, brush it off and enjoy life.
If your goal is to be positive, it's slightly because it's censored.
If your goal is profits/controversy/linkbait, it is.
If your goal is to bring science, it's not. I can't repeat the same test. This post left out tons of data for others to test your thesis.
If your goal was to dialogue with others about your findings, it's not. You muzzled many of them.
At the end of the day, you know your goals and I doubt you'd publish your motives but I'm just trying to provide my 2 cents. Ciao. I hope Moz had a great New Year and Holiday season! :)
Hi Joshua,
I appreciate your comments, and can honestly understand where you're coming from. I think perhaps though that Rand's comment was misconstrued. A comment would never be deleted simply because it is critical of us. In fact, as Rand mentioned, we encourage that. The only reason a comment would be edited or deleted (other than spam) is if the comment is insulting another member, full of profanity or, as Rand mentions, "harshly negative". Obviously, these calls are at the discretion of the admin, but we don't take these things lightly. Our first inclination is NOT to edit or delete.
This is not the only forum to do this. At Pubcon this year I attended a Community Management session where this very topic was discussed. Many sites handle it differently, but they all deal with it in some way or another.
To cover the topics you mentioned:
1. We don't have any control over thumbs up or down. If someone says "prove it" or "you're wrong" and they get thumbs down, that's because the rest of the community didn't like it.
2. If you read through the comments on this post alone, I think you'll see many of those comments you mention. As I mentioned, only the extreme might get edited.
3. Again, I ask you to look through this thread. There is plenty of negativity whether we like it or not. The idea isn't to delete any and all harsh comments, but to edit if necessary. I think this post and comments proves that we don't remove all negativity. :) There's plenty of it here.
Our goal is to have an open and honest forum for our members without being hurtful to other people.
And finally, I know I'm still somewhat new(ish), but I've found the SEOmoz team fully embraces the criticism we receive, and we get a lot. :) Perhaps others haven't fully seen this, but I've found it pretty damn refreshing to work with a team of people who are first of all passionate about what they do, and second care to continue to improve.
If I screw up, tell me. There may not be anything I can do to fix my mistake, but I'll surely learn from it for the future.
This whole thread including the many twitter comments has been quite a ride for me. Thanks everyone!
I hope you all had a great holiday and New Year. I for one, am looking forward to the wonders of 2010. :)
Is it possible to compare this against a 6th group?:
* No links at all
By this, I meant that rather than have 9 nofollowed links & 1 normal link, there is just 1 normal link.
This would be 'the perfect' sculpting, and would give you an idea of the best possible result that could be achieved. Comparing the variance with other methods would be useful as well, to get an idea of how effective the other methods actually are.
Danny,
I like the concept, but I feel there is something missing. For the sake of transparency you should have included screenshots and disclosed what your test phrase was. As often said on the Internet, "Pics or it didn't happen." But good job creating a comprehensive model.
A few questions about your test:
1. Which page's rankings were you tracking on each domain?
2. How long did you give the test?
I ask because nofollow might still work as a way to hoard PageRank yet still kill the rankings of other pages on the site, because the remaining PageRank evaporates. If Google et al. are not consistently implementing their new rules, then the first page would keep more PageRank, but none would get passed to the 'dofollowed' links on the site.
K, now I'm just babbling. Hopefully this made some sense.
According to the article, your answer to question 2 is two whole months.
Great post and test - however, I have 2 points
I agree with both points, though I was able to see the real examples, so I can have more faith in them. Hopefully Danny's additions to the post can help alleviate your concerns about point #1. On #2 - I hope this was made clear right at the beginning of the post. Nofollow sculpting is quite low on the SEO totem pole of activities.
I wholeheartedly agree with your first point "...there is much more value in link building and creating content that is link-worthy than obsessing over search engine algorithm fluctuations...".
Kudos on a fantastic experiment and article with all the details. Talk about a link-worthy piece of content! Consider this one linked. :)
I wonder what would happen if you repeated this test? I bet the results would not be the same. Conclusion? This experiment has too many variables to be conclusive.
I wonder if you would say that the experiment is in fact conclusive if the results were the same on a second test! =)
There will always be something to point out and wonder about. This experiment shows that it is possible for sculpting to work, though that may not mean it will always work.
It's a matter of trying it out for yourself and getting the results in your case. It could work, as it did for Danny, or it might fail. There's only one way to know.
Confirming the results with a second, identical test will not prove the claims about page rank. It will prove that the test methodology is insensitive to everything that was different between the two tests, or Lady Chance is having fun watching us labor.
It is easy to say "no test is perfect". It is also easy to say "we will use inference". Neither gets us closer to being able to say page rank sculpting works.
Without being too harsh (as I appreciate the effort this test would have taken), inferring any kind of signal from such a small result set over such a short date range is close to reckless, in my opinion. Although efforts were made to reduce noise in the set of tests, there are hundreds of external factors (ranking and otherwise) that you would have had absolutely no control over, and which would have caused considerable "noise" in the result set.
The fact that no attempt was made to filter this noise from the result set is also alarming. A simple process behaviour / control chart (or similar), if applied to the data above, would have made any signals (if they existed) more apparent. My guess, from looking quickly at the standard deviation of the 8 tests, is that if such analysis were done, no such signals would be found.
Thanks for sharing your findings. When I discussed this with Rand at SMX Advanced, we both were skeptical. However, the data speaks for itself. Our test following SMX wasn't as aggressive but showed the same thing: sculpting still works.
Great stuff Danny and SEOmoz - like many others in the comments here, it's great to see testing like this happening, and I would welcome more (even a regular, e.g. bi-monthly, test on a question voted on by the community? That would be awesome). This kind of testing is what makes communities like SEO Dojo so appealing.
Like Whitespark and SEOmofo above, I'm a little unclear as to exactly how this test was set up and would love to see some more details and a diagram if possible, even if a full research paper with all the details is not possible. I too am surprised at how placing links in invisible JS or iframes could be less effective at sculpting link juice than nofollowing.
A couple of thoughts on this 'nofollow PR sculpting' thing:
Thanks again Danny, look forward to more info / more tests!
(edited to sort out paragraph breaks)
I think Matt Cutts did not mean that nofollow is a futile effort for page sculpting. I understood Matt Cutts to mean that every page will now pass PR to all links on that page equally. If there are 10 links, then each link will get 10% of the link juice regardless of whether the link is marked nofollow. The nofollow links will waste the PR given to them. This is a stark contrast from the past, when nofollow links wouldn't absorb any PR.
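The before/after arithmetic described here can be made concrete. This is only an illustration of this reading of Matt Cutts' statement, with made-up numbers:

```javascript
// A page with 10 units of PageRank to pass and 10 outgoing links,
// 9 of them nofollowed (the classic sculpting setup).
const juice = 10;
const totalLinks = 10;
const followedLinks = 1;

// Old behavior (pre-announcement): nofollowed links were dropped from
// the denominator, so the lone followed link got everything.
const oldPerFollowedLink = juice / followedLinks; // 10

// New behavior: every link still counts in the denominator, but the
// share assigned to nofollowed links simply evaporates.
const newPerFollowedLink = juice / totalLinks; // 1
const evaporated = juice - followedLinks * newPerFollowedLink; // 9
```

Under the new behavior, sculpting no longer concentrates juice on the followed link; it only determines how much of the page's PR evaporates.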
My suggestion to clients is to only use a nofollow tag on links to pages that you absolutely don't want Google to crawl. I also suggest consolidating links to avoid passing PR to dud pages like the about us page.
I wanted to thank Danny for his observations, and I would have liked to see how PR played a role across the 40 domains.
I've been waiting for such a test from SEOMoz since Matt's announcement about nofollows.
Keep up the great work!
Great post Danny,
Can you share a little bit of information on how this played out across the 8 parallel scenarios (as opposed to just the averages)?
Hope you guys had a good Christmas
Done :-)
Cheers Danny.
Great work Danny - that must have been a lot of hard work going into setting up this project! Any thoughts on whether you'll continue running these tests? i.e. seeing how the search engines treat them in, say, 3 months?
I must admit that as in-house SEOs we often simply don't have time to build decent research models like this, so props for doing this for the SEO community at large - as I don't see many others doing this...
I had a gut feeling this was the case, which is why I did not bother to remove the rel=nofollow tags for existing clients and still include it for new clients. Could not have summed it up better than the first sentence under The Bottom Line.
I didn't remove them either.
How can you say PR sculpting "works" when PR itself can not be measured?
Another HUGE difference is that every site is not equal when it comes to PR, which drastically skews this test.
Dear incrediblehelp,
Virtually NOTHING in the Universe can be measured with perfect precision, and not everything can be measured directly. Science doesn't require perfect precision to be valid:
A man steps onto his perfectly-calibrated bathroom scale, which (directly) measures his weight as 300 pounds. The man is troubled by this reading, since it would imply he's morbidly obese, so he decides to seek out a higher-quality scale... just to be sure he's really that heavy. The man goes to his doctor's office, where one of the nurses measures his weight on her perfectly-calibrated medical-grade scale, which measures his weight as 300.4 pounds. The man is still in fatty denial, so he seeks out yet a 3rd scale. The man goes to a high-tech laboratory and weighs himself on a perfectly-calibrated ultra-sensitive scale, which reads his weight as 300.4265 pounds. The man realizes that his weight can theoretically be measured to an infinite number of decimal places... but it wouldn't change the fact that he'll probably die prematurely from heart disease. Instead of criticizing the various scale manufacturers, the man goes jogging and stops bothering people.
Science doesn't require direct measurements:
After our flabby friend returns from his jog, he goes to the park to play on the teeter-totter. The man sits on one end and waits for someone to come along and join him. Pretty soon, a comparably-fat woman walks over and sits down on the other end of the teeter-totter, which raises the fat man into the air. This suggests that the woman is heavier than the man. The fat man is so excited to meet someone fatter than himself that he takes the woman to another teeter-totter, to make sure the first measurement wasn't a fluke. Again, the woman sits down and raises the man into the air. The fat man then embarks on a 5-day tour of the country, stopping at every teeter-totter he can find. He tries a variety of teeter-totters (all different styles, brands, construction materials, etc.) and every time... the woman's end sinks to the ground. Just to be extra sure, the man and woman wear the same exact outfit, sit the same distance from the end of the teeter-totters, and sit in the same position. They try to neutralize every irrelevant variable they can think of, in order to compare only their body weight. Still, the outcome is the same every time. The man is finally convinced that the woman is heavier than he is, despite the fact that he doesn't know how much she weighs. The man and woman are then accidentally shot and killed by elephant poachers.
I hope these stories have helped clear up your misconceptions about the measurement of PageRank and its effect on the validity of related experimentation.
Sincerely,
SEO Mofo
Wow.
of course PageRank can be measured perfectly, it's an algorithm that follows order and logic you twit...
@SEOmoz staff, Please don't delete this guy's comment. I want to respond to it, and I personally love being called names, so it hasn't lessened my current state of warm-and-fuzziness.
Dear seopform,
Thank you for your input. As the World's Greatest SEO, I know how important it is to open myself up to criticism from the SEO community. It is this criticism that fuels my passion for excellence and pushes me to be a better SEO, despite the fact that I am already the best.
Regarding your statements about PageRank, I offer you the following information, which strongly suggests that PageRank cannot be measured with perfect precision. This is an excerpt from The PageRank Citation Ranking: Bringing Order to the Web, by Larry Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. [Emphasis is mine.]
"2.7 Dangling Links: One issue with this model is dangling links. Dangling links are simply links that point to any page with no outgoing links. They affect the model because it is not clear where their weight should be distributed, and there are a large number of them. Often these dangling links are simply pages that we have not downloaded yet, since it is hard to sample the entire web (in our 24 million pages currently downloaded, we have 51 million URLs not downloaded yet, and hence dangling). Because dangling links do not affect the ranking of any other page directly, we simply remove them from the system until all the PageRanks are calculated. After all the PageRanks are calculated, they can be added back in, without affecting things significantly. Notice the normalization of the other links on the same page as a link which was removed will change slightly, but this should not have a large effect."
"4 Convergence Properties: As can be seen from the graph in Figure 4, PageRank on a large 322 million link database converges to a reasonable tolerance in roughly 52 iterations. The convergence on half the data takes roughly 45 iterations. This graph suggests that PageRank will scale very well even for extremely large collections, as the scaling factor is roughly linear in log n."
If you have any further questions about SEO or PageRank, I'd be happy to answer them for you. Happy Holidays, SEO Mofo
I see no reason to bring iterations into the conversation. That, my friend, is a bridge too far.
You have to chose, World's Greatest SEO, or World's Smartest SEO. You can only hold one given title at any given time.
For instance, mine currently is:
33rd Degree Search Engine Zen Master of the Emerald Order
Do I know what that means? No. Does it sound more impressive than World's Greatest SEO? Yes.
Why? Because it creates mystique, and makes it sound like I belong in a Dan Brown novel. I don't like Dan Brown, but evidently other people do.
So pick one title, and quit writing such intelligent responses.
*crumples up his 5th Degree Search Engine Hen Master of the Peridot Order diploma and throws it in the trash*
Darren,
Apparently hell froze over. For the first time ever, I am in complete agreement with you on something.
Your examples were spot-on, and seopform apparently comes from the school of hard heads. Of course, you knew this already.
Apparently hell froze over. For the first time ever, I am in complete agreement with you on something. So...I AM the World's Greatest! W00T!
Congratulations, you've proven nothing except the fact that the authors concluded standard deviation was necessary to approximate PR in some circumstances - and you've done an exemplary job of citing a paper which carries about as much weight as trying to nail jello to a tree.
If you'd like to start quoting Google documents, perhaps we could run through how many times various Google employees have stated how far removed PageRank now is from its original inception, which you seem to put so much trust in?
@seopform
Specifically, with regard to the so-called "dangling links", why would you include data that doesn't significantly affect the overall data model?
I think what Slatten is getting at is this: So long as the results Google gets don't deviate too far from the curve on a scatter plot, they don't care.
While the margin of error may get smaller over time, there will always be some information that Google cannot assign a value to.
Therefore, Google cannot be precise with regard to the value of PR, because even Google doesn't know how to assign relevance to everything (yet).
As to Google employees, who says they aren't just messing with our heads?
I would bet the core of the PR algorithm hasn't changed all that much from what Brin and the boys originally dreamed up.
Outstanding work, seopform. If I may, I'd like to recap the argument up to this point, for the sake of any newcomers:
seopform: PageRank CAN be measured perfectly. It's an algorithm that follows order and logic.
SEOmofo: It says here in the original PageRank paper that PageRank CANNOT be measured perfectly.
seopform: The only thing you've proven is that the people who invented PageRank have clearly stated that PageRank CANNOT be measured perfectly. But that doesn't mean anything, because that paper is from like... 10 years ago. That's plenty of time for the Google engineers to invent something that makes linear algebra, vectors, probability distributions, link graphs, long division, and the number 1... like... totally obsolete. In fact, I'm willing to bet that PageRank isn't even used on the Web anymore; it probably calculates the probability that Larry Page needs to take a shower.
SEOmofo: You're too powerful, seopform, and I cannot defeat you. I am undone.
The End
@seopform congratulations, you've proven nothing
I gotta admit - I read this post a few times and still walked away with too many questions. I didn't feel as secure with the answer as some others.
The PageRank algorithm (which I think is very fluid) is part of the big picture, but so many other things are at play: even the slightest differences between the test pages, or any one of the other 200 unknown factors. Who knows - character count? Relations to words that Google may be measuring that you don't realize? Any given tweak that leans toward one test site's context over another's? I think it's a great experiment, and I'm sure all the controls were as contained as possible.
I think we should take all this as "some evidence that PageRank sculpting may still exist to some degree within certain situations". The same way we take all the unknowns. But then, I'm less of a science guy and more of an art guy anyway :)
"I'm not saying 'chaos theory,' but I sort of am."
Hi there incrediblehelp!
You're right, it's really hard to measure PR; however, Danny showed absolute rankings in his results, not PR values for those pages.
As we know, PR is not that relevant for SERP rankings - the PR itself is not what is being tested, but what "PR sculpting" can do for you.
The second point is valid: it's hard to really endorse that PR sculpting undoubtedly works with such a small difference between all experiments.
If I recall what I was reading when people began to grow skeptical about PR sculpting, it wasn't that Google said it wouldn't work, but more that they said nofollow links would still drain some PageRank from the linking page without passing that juice on to the linked-to page. Am I remembering this correctly, or just making stuff up in my memory?
I think the best thing to do to have good PageRank going into 2010 is to keep updating your site and build good traffic.
But there is also the possibility that PageRank will be less effective, since Google is using Caffeine. Caffeine prioritizes bookmarked sites over others, so networking is the key to getting bookmarks, with organic search second. In my opinion, networking your site, even if you are linking nofollow, can help you rank in the SERPs, since many users will visit your site.
Note: This is just my opinion.
Have you ever tried sculpting using internal 302 redirects? According to Google, 302 redirects do not transfer PR.
The point of PR sculpting is to prevent PageRank from flowing to pages that users need to be able to access. For example, an ecommerce order form: you want your visitors to be able to order your product from any page on your site, but you don't want Google to pass PageRank to your order form page...because it doesn't contain any real content and it's not suitable as a landing page.

Sure, you can use 302 redirects...but that wouldn't accomplish anything. Let's say you set up your order form like this:

/order.php --> 302 redirect --> /order-form.php

True, the order-form.php page would not have any PageRank flowing to it...but now you would have /order.php collecting PageRank, and Google would associate that URL with the content on /order-form.php. All you've done is change the URL that's wasting your PageRank; you haven't prevented it from being wasted.
Well, you tell Google that order.php is temporarily replaced by order_form.php. A 301 redirect would transfer the PR to order_form.php, but the 302 will prevent Google from doing so. I am not sure if order.php would keep the PR. I came across that question because one of the real estate SEO heavyweights, Trulia, is doing it big time on their property result lists. E.g.: https://www.trulia.com/NY/New_York/ Click on any property detail page and you will see the link you clicked on being 302 redirected to the property detail page with a different URL. I wonder why they do that?
I have no idea why Trulia is doing that, but I highly doubt it's an intentional SEO strategy. I looked into it a bit, and this is what I found:

Individual property links (located on the results list pages, such as the one you mentioned) are in the form:

/tfl/p=[$propertyID]&u=/property/[$propertyID](arbitrary-crap)

(In case you're wondering, I'm just guessing that "p" stands for "property," and I'm using the [$string] syntax to represent server-side variables. "Arbitrary crap" could be anything.)

After you request that URL from Trulia's server, it performs a couple of processes that roughly follow this logic:

1. Parse URL
2. If URL contains /tfl/, then 302 redirect to /property/[$propertyID](arbitrary-crap)
3. Look up database record that corresponds to [$propertyID]
4. Build individual property page from database record

This is actually poor programming, as far as I can tell. Their CMS is building individual property links from the [$address] field, but that entire portion of the URL is ignored when responding to a request (hence, it becomes "arbitrary crap" at that point). This creates the possibility of infinite duplicate content, because the server accepts ANYTHING, as long as it contains the [$propertyID]. For example, one of the links on the NY page is:

https://www.trulia.com/tfl/p=1094416961&u=/property/1094416961-2110-Frederick-Douglass-Blvd-6C-New-York-NY-10026

But the server will return the same content at this URL (which I just made up, using Google's corporate office address):

https://www.trulia.com/tfl/p=999999999999999999&u=/property/1094416961-1600-Amphitheatre-Parkway-Mountain-View-CA-94043

Then they compounded the problem even more, by disallowing the /tfl/ directory in their robots.txt file. Unfortunately, that doesn't stop Google from distributing PageRank to the /tfl/ URLs (most people don't realize this).
So in the end, the 302 redirects don't have any effect on SEO, because Google isn't allowed to request those URLs (therefore, Google doesn't even know they redirect at all.) The result of this poor programming is that Trulia has wasted PageRank on a whole lot of pages that Google can't crawl. Bottom line: Don't do what Trulia's doing. That's not the reason why they rank well.
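For what it's worth, the URL logic described above can be sketched as a tiny toy resolver. This is purely a reconstruction of the behavior inferred in the comment, not Trulia's actual code; the function name and regexes are my own illustrative guesses:

```python
import re

def resolve_url(path):
    """Toy sketch of the inferred redirect logic: any /tfl/ URL is
    302-redirected to the /property/ URL in its u= parameter, and a
    /property/ page is built solely from the numeric ID -- everything
    after the ID is ignored, which is what creates the possibility of
    infinite duplicate content."""
    m = re.match(r"/tfl/p=(\d+)&u=(/property/\d+.*)", path)
    if m:
        # 302 redirect to whatever the u= parameter says
        return 302, m.group(2)
    m = re.match(r"/property/(\d+)", path)
    if m:
        # page is built from the database record for this ID alone
        return 200, "property page for ID " + m.group(1)
    return 404, None

# Two URLs with different trailing "arbitrary crap" serve identical
# content, because only the ID is consulted:
print(resolve_url("/property/42-Main-St"))
print(resolve_url("/property/42-1600-Amphitheatre-Parkway"))
```

Both calls return the same body, which is exactly the duplicate-content trap described above.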
This is great news for spammers. Mass header spam on relevant resources with different anchors, deep linking, and internal linking, and there you have it: fast-food PR sculpting.
First of all, congratulations for such a deep research!
As folks said already, it's really convincing, be it because of the way you wrote it or due to the results themselves. It's really awesome!
Also, since the very day Matt Cutts said nofollow didn't work "that" way, almost every SEO said "if it's working, let it be," so maybe it's just this: don't overlook PageRank sculpting, but if it is already working for you, keep up the good work!
Congratulations once again! Amazing findings.
1) Did you notice any significant ranking differences due to personalized search?
Were there, over two months' time, any ranking changes that could affect the average ranking in any negative way?
2) What about the competitiveness of the targeted keyword? If I get it right, at the final stage you got a listing of just five pages (the keyword was unique), right? Wouldn't it be better to choose a keyword with low competitiveness (let's say, <10.000 results) to see not only the rankings of the 5 tested pages but also the difference in rankings compared to other pages not used in the experiment?
Great points,
With regard to the first one, these tests took place entirely before personalized search was implemented. There was no cross-over.
With regard to your second point: The competitiveness of the targeted keyword phrase was only the other experimental websites. We chose keyword phrases that were completely unique to the entire Internet. (Ex. Horsey Cow Tipper). We discussed this as a possible problem for a while in house before deciding its pros outweighed its cons.
Each method targeted the same unique phrase (oxymoron?) and the 8 different trials were each isolated with a different keyword phrase.
Thanks for a detailed explanation. That makes sense :)
At one point I was going to write a blog entry about this but my focus shifted to other pressing matters.
I never did buy into the end-of-the-world theories in regard to nofollow links and Google's announcement.
Just from a point of logic, PR sculpting with nofollow links would still work, just not as much as before in theory. But since Google made this change long ago, obviously it still works well enough to use.
If you had 10 points to pass on through 10 links then nofollowed 5 of them, under the old way of thinking each followed link would get 2 points of link juice. Now as we understand it we pass 1 point and the nofollowed links dump their link juice.
So if PR sculpting worked before google made their announcement, unless they change things in their algorithm it should work the same after, we just understand it better now.
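The arithmetic above can be sketched as a toy model. The numbers (10 points, 10 links, 5 nofollowed) come from the comment; real PageRank also applies a damping factor and is computed iteratively over the whole link graph, so treat this as illustration only:

```python
def juice_per_followed_link(page_pr, total_links, nofollowed,
                            pre_2009=False):
    """Toy model of the two interpretations described above."""
    followed = total_links - nofollowed
    if pre_2009:
        # old understanding: juice is divided among followed links only,
        # so nofollowing links concentrates juice on the rest
        return page_pr / followed
    # current understanding: juice is divided among ALL links;
    # the nofollowed links' share simply evaporates
    return page_pr / total_links

# 10 points of juice, 10 links, 5 of them nofollowed:
print(juice_per_followed_link(10, 10, 5, pre_2009=True))  # 2.0 per link
print(juice_per_followed_link(10, 10, 5))                 # 1.0 per link
```

In both models the followed links still receive juice, which is why sculpting could keep "working" after the announcement, just less dramatically.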
Excellent information. Appreciate that you tested this. I did not remove nofollow because I'm cautious by nature.
Thanks SEOmoz for doing this kind of test. I think SEO should be more about facts and not about beliefs.
Another great example of this kind of scientific testing is the Flash test, where they analyzed how well Google finds links in .swf files.
Anyway, I hope you don't mind that I blogged about the results of this experiment in Finnish. I linked back to this article and gave credit to you :)
My thoughts in Finnish about this experiment.
The attribute is something I use on a few links and will continue to use until there's significant evidence that there's no point. Why is Google getting so controlling with something like this? How else will we be able to control which pages have a high PR, which ones don't, and where we want to spread our PR?
I know you spoke about using iframes and javascript but with no result, and then a user commented about the javascript blocking the link entirely but this seems like a lot of hassle for something as small as "PR Sculpting".
By no means am I saying "PR Sculpting" is a small task. Merely stating that playing around with javascript, iframes and so on and so forth seems like overkill when continuing to use the rel=nofollow command is so useful.
I think the best way to indicate which pages are important (i.e., which pages deserve the most internal PR) is by choosing how you link to them, and by that I don't mean how you code your links. I mean that if a page is less important -- to you and to your users -- then link to it less, and link to it from less important pages. If the page is more important, then link to it more, and from more important pages. Information architecture, people. That's what PR was designed to measure in the first place, remember?
If experimentation suggests to you that putting nofollow on your links works and you want to do that instead of just linking to show which pages are important, that's your prerogative. Keep in mind that rel=nofollow was created by the search engines, and not for this purpose.
No matter how strongly you believe in your experiment, you don't know that it's going to work next month, or the month after that.
We know that linking to a page is a way to say it matters, and there's absolutely no reason to think that's going to change.
If Matt Cutts were to whisper in my ear, "Don't tell anybody, but nofollow works for internal PR sculpting, and I'm 100% certain it always will" (which I admit isn't very likely, as we've never met) I still wouldn't see any point in using it.
No matter how strongly you believe in your experiment, you don't know that it's going to work next month, or the month after that. This is precisely the reason why I no longer use the attribute. Matt Cutts' announcement about it made me realize: I don't care if it works or not--it can never be trusted 100%, and I don't want to waste my time thinking about it or wondering how Google will handle it in the future.
I pretty much agree with this conclusion Darren - I used to obsess over nofollow PR sculpting, then after all the hoo-hah with Matt Cutts' announcement I realised I'm spending way too much time on this - nowadays I'm just sticking to 'noindex' tags on those 'Privacy policy' etc pages, let the juice flow through em.
Interesting test, but not everyone has the time or resources to do such detailed testing. It is just more good news for spammers, who will go back to old habits.
Now we hope Matt Cutts will clarify the point! Thanks for sharing this excellent experiment.
Our sites have an attribute-based search system, and it automatically creates millions of worthless pages for search engines.
I am using nofollow actively on all these sites - if it is even an indication to search engines not to crawl the page, it is worth it.
Hi Danny,
You have done significant research, performing effective experiments. Thanks for sharing the results here. I believe working our own way with our websites, and not changing them for Google, can help far better.
The comments following the post were also very good and informative.
So basically you're saying that matt cutts is full of s@&t? Nice work!
I don't think I'd put it that way at all. What I'd say is that Google may indeed be treating nofollow in the ways Matt described for some algorithmic elements, but it apparently doesn't change the overall impact it has on rankings as a whole. There's so much nuance in Google's ranking system that I'd be very cautious of using a test like this, even with relatively convincing results, to show that an official Google rep actively lied to the search marketing community. I certainly don't believe that personally.
Great test and subsequent posting, Danny!
Nice to see that sculpting still works! Will tell others about this one too! :-) Jim
Very interesting and I love the scientific approach.
I had been asking for more explanation of nofollow on internal link structures on a Q&A a few months back, and here is a really good study. Love it.
The research is highly appreciated. As you invited others to comment on this, please see my comments below. I do realize that it is so much easier to comment on others than to conduct the research yourself. That said,
It is clear that within the 8 test groups, the "nofollow" method has the best score on average (2.4). It is also clearly separated from the average score of the other methods (3.0 - 3.2). What really puzzles me is that within test 6 the "nofollow" method is the worst ranking. Also, to a lesser degree, it puzzles me that the "control" method, which has the worst score on average, is in fact the winner in test 3 and test 8. So 3 out of 8 individual test results are quite different from the average result for the 8 tests combined.
So based on this test, I'm inclined to assume that the nofollow method may cause better rankings on average, but clearly not in every individual case.
That got me thinking: how can the 3 odd test results be explained? You have gone to great lengths to keep all other factors equal, and you have really in-depth knowledge of SEO. This rules out a lot of obvious explanations. So how do you interpret the 3 odd results?
(To me, the most obvious explanation would be randomization of results on Google's part or some sort of background noise, but this is not satisfactory as an explanation, and it also leads to the question of how much of the rankings is the result of randomization or background noise in the first place.)
I think that PageRank sculpting makes sense only on really big sites, more than 100,000 pages. 5 or 6 pages on one domain is not enough to check whether it works or not.
Remember that in a case like this we are sculpting, at a baseline, 0.15 PageRank or a little bit more.
Try to predict the impact of PR sculpting on two sites:
0.15 * 6 pages = ?
0.15 * 100,000 pages = ?!
PR sculpting on such a small site is like sculpting a tiny piece of rock.
A couple of weeks ago I did an SEO audit on a medium WordPress blog (300-400 pages) and focused on PageRank sculpting (WordPress SEO Audit). So far I can't see any impact of PR sculpting on the rankings - the site is too small.
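A quick back-of-the-envelope sketch of those two multiplications. It assumes, purely as a toy model, that each page contributes only the 0.15 minimum-PageRank baseline of sculptable juice; real values depend on the whole link graph:

```python
# Toy model of the scale argument: each page contributes only the
# 0.15 PageRank baseline that sculpting could redirect.
# Illustrative numbers, not real PageRank values.
BASELINE = 0.15

def sculptable_pr(pages):
    """Total 'moveable' PageRank across a site of the given size."""
    return BASELINE * pages

print(sculptable_pr(6))        # roughly 0.9 -- barely anything to move
print(sculptable_pr(100_000))  # roughly 15,000 -- enough to matter
```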
By the way, I discovered a new way of PageRank sculpting; my post is waiting in the YOUmoz section. Please, SEOmoz'ers, check it and publish it - it's a good moment.