As an SEO agency, Virante has always prided itself on having research-based answers to the questions presented by our clients. A year or so ago, I caught myself referring to a site as having "a great looking natural link profile" without really having any numbers or analysis to describe exactly what that profile should look like. Sure, I could point out a spam link or two, or what looked like a paid link, but could we computationally analyze a backlink profile to determine how "natural" it was?
We dove into this question several months ago while trying to develop automated methods to identify link spam and link graph manipulation. This served dual purposes - we wanted to make sure our clients were conforming to an ideal link model to prevent penalties and, at the same time, wanted to be able to determine the extent to which competitors were scamming their way to SEO success.
Building the Ideal Link Model
The solution was quite simple, actually. We used Wikipedia's natural link profile to create an expected, ideal link data set and then created tools to compare the Wikipedia data to individual websites (a rough sketch of the whole pipeline follows the steps below)...
- Select 500+ random Wikipedia articles
- Request the top 10,000 links from Open Site Explorer for each Wikipedia article
- Spider and Index each of those backlink pages
- Build tools to analyze each backlink on individual metrics
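For those who want to reproduce the general approach, here is a minimal sketch of that pipeline. It is written in Python purely for illustration (our production tools are PHP), and fetch_top_backlinks() is a hypothetical placeholder for whatever backlink source you have access to - the exact Open Site Explorer calls are omitted here.

```python
import urllib.request

def random_wikipedia_articles(n=500):
    """Follow Special:Random n times and record the resolved article URLs."""
    urls = set()
    while len(urls) < n:
        resp = urllib.request.urlopen("https://en.wikipedia.org/wiki/Special:Random")
        urls.add(resp.geturl())  # geturl() returns the final URL after the redirect
    return list(urls)

def fetch_top_backlinks(article_url, limit=10000):
    """Hypothetical wrapper around your backlink provider (e.g. an OSE export)."""
    raise NotImplementedError("plug in your backlink source here")

def spider(page_url):
    """Download the raw HTML of a linking page for later analysis."""
    return urllib.request.urlopen(page_url, timeout=10).read().decode("utf-8", "ignore")

def build_dataset():
    """Returns (target_article, linking_page_url, linking_page_html) tuples."""
    dataset = []
    for article in random_wikipedia_articles():
        for backlink in fetch_top_backlinks(article):
            try:
                dataset.append((article, backlink, spider(backlink)))
            except Exception:
                continue  # dead, blocked, or slow pages are simply skipped
    return dataset
```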
Once the data was acquired, we merely had to identify the different metrics we would like to compare against our clients' and their competitors' sites and then analyze the data set accordingly. What follows are three example metrics we have used and the tools for you to analyze them yourself.
Link Proximity Analysis
Your site will be judged by the company it keeps. One of the first and most obvious characteristics to look at is what we call Link Proximity. Most paid and spam links tend to be lumped together on a page, such as 20 backlinks stuffed into a blog comment or a sponsored link list in the sidebar. Thus, if we can create an expected, ideal link proximity from Wikipedia's link profile, we can compare it with any site to identify likely link manipulation.
The first step in this process was to create the ideal link proximity graph. Using the Wikipedia backlink dataset, we determined how many OTHER links occurred within 300 characters before or after that Wikipedia link on the page. If no other links were found, we recorded a 1. If one other link was found, we recorded a 2. So on and so forth. We determined that about 40% of the time, the Wikipedia link was by itself in the content. About 28% of the time there was one more link near it. The numbers continued to descend from there.
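In code, the counting step looks roughly like the sketch below. This is a simplified Python version that assumes a naive regex is good enough to locate anchor tags; the real tool parses the HTML more carefully, but the idea is the same.

```python
import re
from collections import Counter

ANCHOR_RE = re.compile(r'<a\s[^>]*href=["\']([^"\']+)["\']', re.IGNORECASE)

def proximity_counts(html, target_domain, window=300):
    """For every link to target_domain on a page, count how many anchors
    (including itself) fall within `window` characters before or after it,
    so 1 = standalone, 2 = one neighboring link, and so on."""
    anchors = [(m.start(), m.group(1)) for m in ANCHOR_RE.finditer(html)]
    counts = Counter()
    for pos, href in anchors:
        if target_domain not in href:
            continue
        nearby = sum(1 for other_pos, _ in anchors if abs(other_pos - pos) <= window)
        counts[nearby] += 1
    return counts

def to_percentages(counts):
    """Convert raw tallies into the percentage curve we plot against Wikipedia's."""
    total = sum(counts.values()) or 1
    return {k: 100.0 * v / total for k, v in sorted(counts.items())}
```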
Finally, we plotted these numbers out and created a tool to compare individual websites to Wikipedia's model. Below is a graph of a known paid-link user's link proximity compared to Wikipedia's. As you will see, nearly the same percentage of their links are standalone. However, there is a spike at five proximal links for the paid-link user that is substantially higher than Wikipedia's average.
Even though paid links represent only about 25% of their link profile, we were able to detect this anomaly quite easily. Here is the Link Proximity Analysis tool so that you can analyze your own site.
White Hat Takeaway: If you are relying on link methods that place your link in a list of others (paid, spam, blog-rolls, etc.), your links can be easily identified. While I can't speak for Google, if I were writing the algorithm, I would stop passing value from any 5+ proximal links more than one standard deviation above the mean. Run the tool on your site to see whether it looks suspicious, and make sure that you are within about 18% of Wikipedia's pages for 4+ proximal links.
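To be clear, that is only how I would phrase the rule of thumb, not how Google actually works. If you wanted to automate the check yourself, a rough sketch might look like this, comparing each bucket of a site's proximity curve against the mean and standard deviation of the per-article Wikipedia curves:

```python
from statistics import mean, stdev

def flag_suspicious_buckets(site_dist, wiki_page_dists):
    """site_dist: {proximal_links: percent} for the site being audited.
    wiki_page_dists: the same kind of dict for each Wikipedia article in the
    model. Returns the buckets where the site sits more than one standard
    deviation above the Wikipedia mean."""
    flagged = {}
    buckets = set(site_dist) | {k for d in wiki_page_dists for k in d}
    for bucket in sorted(buckets):
        samples = [d.get(bucket, 0.0) for d in wiki_page_dists]
        mu, sigma = mean(samples), stdev(samples)
        if site_dist.get(bucket, 0.0) > mu + sigma:
            flagged[bucket] = {"site": site_dist.get(bucket, 0.0),
                               "wiki_mean": mu, "wiki_stddev": sigma}
    return flagged
```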
Source Link Depth Analysis
The goal of paid links is to boost link juice. The almighty PageRank continues to be the primary metric that link buyers use to determine the cost of a link. Who buys a PR0 link these days? It just so happens that PageRank tends to be highest on the homepage of a site, so most paid links also tend to come from homepages. This is another straightforward method for finding link graph manipulation - just determine what percentage of the links come from homepages vs. internal pages.
Once again, we began by looking at the top 10,000 backlinks for each of the 500 random Wikipedia pages. We then tallied how many folders deep each acquired link was. For example, a link from https://www.cnn.com would score a 1, and one from https://www.cnn.com/politics would score a 2. We created a graph of the percentage at which each of these occurred and then created a tool to compare this ideal model to that of individual websites.
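The depth calculation itself is trivial; a Python sketch is below. How you score trailing filenames or query strings is a judgment call, so treat the exact counting rule here as an assumption rather than the tool's precise behavior.

```python
from collections import Counter
from urllib.parse import urlparse

def source_link_depth(url):
    """https://www.cnn.com -> 1, https://www.cnn.com/politics -> 2, etc."""
    path = urlparse(url).path.strip("/")
    return 1 if not path else 1 + len(path.split("/"))

def depth_distribution(backlink_urls):
    """Percentage of backlinks at each folder depth, ready for plotting."""
    tally = Counter(source_link_depth(u) for u in backlink_urls)
    total = sum(tally.values()) or 1
    return {depth: 100.0 * n / total for depth, n in sorted(tally.items())}
```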
Below is an example of a known paid-link user's site.
As you can see, 79% of their top links come from the homepages of websites, compared to Wikipedia's articles, which average around 30%. SEOmoz, on the other hand, receives only 40% of its links from homepages, well within the standard deviation, and Virante receives 29%. Here is the Source Link Depth Analysis tool so that you can compare your site to Wikipedia's.
White Hat Takeaway: If your link strategy involves getting links primarily from the homepages of websites, the pattern will be easily discernible. Run the tool and determine whether you are safely within 15% of Wikipedia's pages in terms of homepage links.
Domain Links per Page Analysis
Yet another characteristic we wanted to look at was the number of links per page pointing to the same domain. Certain types of link manipulation, like regular press releases, article syndication, or blog reviews, tend to build links two or three at a time, all pointing to the same domain. A syndicated article might link to the homepage and two product pages, for example. Our goal was to compare the expected number of links to Wikipedia pages from a linking page to the actual number of links to a particular website, looking for patterns and outliers along the way.
We began again with the same Wikipedia dataset, this time counting the number of links to Wikipedia from each linking page. We tallied up these occurrences and created an expected curve. Finally, we created a tool to compare this curve against that of individual sites.
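Counting links per page to a target domain reuses the same naive anchor-matching idea as the proximity sketch above. Here is a rough Python version, again only an approximation of what the real tool does:

```python
import re
from collections import Counter

ANCHOR_RE = re.compile(r'<a\s[^>]*href=["\']([^"\']+)["\']', re.IGNORECASE)

def links_to_domain(html, target_domain):
    """Number of links on a single page that point at the target domain."""
    return sum(1 for m in ANCHOR_RE.finditer(html) if target_domain in m.group(1))

def per_page_link_distribution(pages, target_domain):
    """pages: iterable of (url, html) pairs. Returns {links_per_page: percent}."""
    tally = Counter(links_to_domain(html, target_domain) for _, html in pages)
    tally.pop(0, None)  # pages that no longer link to the domain at all
    total = sum(tally.values()) or 1
    return {k: 100.0 * v / total for k, v in sorted(tally.items())}
```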
The example below is a site that heavily relied on paid blog reviews. As you will see, there is a sharp spike in links from pages with three inbound links to the domain.
Caveat: Chances are that when you run this tool you will see a spike at position #1. It is worth pointing out that the majority of website homepages tend to fall into this category. When you run this tool, as with the others, you should probably take a second to look at your competitors as well. Is your site closer to Wikipedia's model than your competitors' sites are? That is the question you should be asking first.
White Hat Takeaway: Is your link strategy creating patterns in domain links per page? A natural link graph will have great variation in this. Moreover, it is not uncommon for authoritative sites to receive 10+ links from a single page. This should be expected - if your site is the authority, it would make sense for it to be cited several times on a thorough page about your subject matter. Here is the Multiple Links Analysis tool to compare your site to Wikipedia's.
What to Do?
First things first, take every result you get with a grain of salt. We have no reason to believe that Google is using Wikipedia's backlink profile to model what is and is not acceptable, nor do we claim that Google is using these metrics. More importantly, just because your site diverges in one way or another from these models does not mean that you are actually trying to manipulate the link graph. If anything, it demonstrates the following...
- If you are manipulating the link graph, it is pretty easy to see it. If Virante can see it, so can Google.
- If you are still ranking despite this manipulation, it is probably because Google hasn't caught up with you yet, or you have enough natural links to rank despite those that have been devalued.
So, what should you do with these results? If you are using a third-party SEO company to acquire links, take a hard look at what they have done and whether it appears to differ greatly from what a natural link profile might look like. Better yet, run the tools on your competitors as well to see how far off you are compared to them. You don't have to be the best on the Internet, just the best for your keyword.
Tool Links One More Time:
Amazing research Russ. If webmasters can see just how easy it is to spot spammy link building techniques, perhaps they will strive for cleaner, white hat methods.
One more use for your tools could be for evaluating link prospects. If a potential linking website's profile looks unnatural, you might think twice before reaching out for a link.
;-) Exactly!
I agree, but sometimes it's freaking hard to get the webmaster of a large site to see it. In most cases, the bad things have to happen first... That's human nature (and stubbornness) :)
Yup - if they are making good money, and it's always worked in the past, why change things up now!?
heheh
Another feature I'd love to see added to this already amazing set of tools is something that looks at anchor text distribution. If it could plot a pie chart showing all the differing anchor texts of links pointing to a page, and their share vs. the Wikipedia average, that might be interesting. Potentially difficult / impossible though - I'm not sure how the API works?
My compliments Russ for the great idea.
Just one note about the Wikipedia model: am I right in thinking that you are looking at the classic, standard Wikipedia page, which usually puts the biggest part of its outbound links at the end of the article (in the "Notes" and "External links" sections, as you can see here: https://en.wikipedia.org/wiki/Search_engine_optimization)? And a question: how does that content structure influence the metrics of the tools?
I put the Link Proximity Analysis tool in my "SEO Tools" favorites folder. Very useful for audits and competitive analysis.
Yeah, it is my favorite of the bunch too as it catches a lot of common styles of shady links.
Good question - certainly the citations section at the end of a post would fall into the high-proximal-link category. However, it is important that we look at things in the aggregate. It is perfectly fine (in fact, it is expected) to have a handful of high-proximal links; the problem is when they make up a disproportionate quantity of the links in your profile. This works both ways, though. It would be similarly strange if 100% of your links were always standalone.
OK, now it is clearer... normality is in the middle... somewhat like what happens with anchor texts in inbound link profiles.
I think this is where using Wikipedia as a control has its weakness. If you look at the "Domain Links per Page Analysis" graph you see that Wikipedia has a huge spike to 40% at >10 domain links per page. This is most likely a result of multiple links to Wikipedia pages in Reference sections.
Russ, these tools are brilliant at visualising the spamtasticness of my competitors or indeed the client SEO legacy. I'm tempted to anonymise the graphs into a twitpic hashtag league table of spikeness deviations because my sector would beat you all by a mile :) If there were a total deviation score averaged across the 3 reports...
Yeah, I need to add that in. I just found a simple Donald Knuth-inspired PHP stddev function that is begging to be added to these tools.
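For the curious, the running-variance recurrence Knuth describes (often credited to Welford) is short enough to sketch here - in Python rather than PHP, just for readability:

```python
class RunningStdDev:
    """Welford's one-pass algorithm: no need to store the whole sample."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def add(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def stddev(self):
        return (self.m2 / (self.n - 1)) ** 0.5 if self.n > 1 else 0.0
```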
"If you are manipulating the link graph, it is pretty easy to see it. If Virante can see it, so can Google."
Thanks for taking out the time to do this research and come up with data and tools. To be honest, I am very skeptical about your Wikipedia model theory, and I also think that detecting link manipulation through tools/algorithms is not as easy as you think it is. A link doesn't have a trail of transactions that Google can just go ahead and trace to identify it as a paid link. This makes detecting paid links algorithmically extremely difficult.
I've actually utilized some of your blog advice...
Thanks!!!
Great post and amazing points! I think that Google doesn't use Wikipedia's backlink profile, but nevertheless, if anyone can be considered to have a natural link profile, it is definitely Wikipedia. Now just to try out those tools :)
I agree completely. I do not believe that Google uses Wikipedia's backlink profile; however, we do know that their link profile is, more likely than not, completely natural. If Google is trying to favor natural link profiles, then it would make sense for us to try to model our link building campaigns upon known natural link profiles - hence, Wikipedia. It is a roundabout way to get at it, but to my knowledge Google hasn't diagrammed out for us what a natural link profile should look like.
Well put. Think about this... I have over 1,000 random WordPress blogs up for affiliate earnings purposes, and I have been noticing a strange comment spam trend. Several comments (that seem to be from automated services like Scrapebox and Xrumer) use random strings of anchor text and link to 'trusted' sources like Wikipedia. Theoretically, if Google started to see these types of links more and more often across several domains, it could skew the 'natural link profile' to favor traditionally spammy and bought links. I agree with the premise that Google has to have some sort of gold standard in determining the optimal link profile, but at this point I think the strategy mentioned above could actually work over time in shaking their cage. My .02.
I've noticed the same type of comment. Obviously spam but linking to facebook.com and bing.com which made no sense.
My thought was that they were testing the waters to see if you would allow a comment with a link to a harmless/general site and then if you approved they would link to the site they want to drive traffic to.
This is exactly what they are doing - it is part of the harvesting process. Start with a list of targets, send out spam with trusted sites in them (bing, facebook), check each page to see if spam went through, add successes to list for future spamming.
Spammers aren't stupid, and most of them aren't lazy either.
*applause*
Russ: fantastic post mate, I will be playing with those tools later today in some detail. I especially like the link proximity metric - let's get rid of them dang pesky "sponsor boxes" once and for all ;)
See you in Seattle!
Martin MacDonald
Probably one of the most informative posts I have come across. I will be playing with the tools this weekend! Thanks for sharing.
Other attributes of a link profile that would be interesting to model (a rough sketch of #3 follows the list):
1. % of links that are no-follow
2. link distribution by type of source (social site, blog, forum, tweet, other site, ...)
3. link distribution by anchor text (exact-match of ranking phrase, variation of exact match of ranking phrase, brand, variation of brand, url, junk link "click here", "visit", etc)
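For #3, something along these lines might be enough to get a baseline distribution going - a rough Python sketch only, assuming a naive regex can pull the anchors and that sorting them into exact-match / brand / junk buckets happens afterwards:

```python
import re
from collections import Counter

ANCHOR_TEXT_RE = re.compile(
    r'<a\s[^>]*href=["\']([^"\']+)["\'][^>]*>(.*?)</a>',
    re.IGNORECASE | re.DOTALL)

def anchor_text_distribution(pages, target_domain):
    """pages: iterable of (url, html) pairs. Returns {anchor_text: percent}
    for links pointing at target_domain, ready to chart against a
    Wikipedia-derived baseline the same way as the other metrics."""
    texts = Counter()
    for _, html in pages:
        for href, text in ANCHOR_TEXT_RE.findall(html):
            if target_domain in href:
                clean = re.sub(r"<[^>]+>", "", text).strip().lower()
                texts[clean or "[image/empty]"] += 1
    total = sum(texts.values()) or 1
    return {t: 100.0 * n / total for t, n in texts.most_common()}
```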
Nice research. Thanks!
I was also thinking about the no-follow links and social websites and forums. Great research nonetheless.
Russ, extremely interesting approach to flag sketchy link profiles. And very generous of you to share the tools with us for free. Makes your approach so much more actionable and usable.
Forwarded to everyone in the office.
Tip of the hat!
Thomas
Thanks! The development guys certainly appreciate the kudos as well - they spend a lot of time making this stuff possible.
What an inspired piece of genius this post is. Love the tools, and will definitely be using them!
Really great and useful tools, Russ! Would it be possible to have two input boxes to compare two sites at once (and omit the Wikipedia profile)?
It would be possible, although probably the best bet would be to keep the Wikipedia model in there too :-) Maybe that will be our next iteration of the tool.
Interesting approach, thanks for sharing some tools with us too :)
Great Article! How long did it take you to put all of that data together? It looks like a LOT of work!!!
A few hours to build the modeling tool. Luckily we have an existing platform developed by Jeff Gula, one of our devs, for deploying rapid web crawlers, so collecting the data on all of Wikipedia's links was actually quite easy.
We grab the data from the SEOmoz Site Intelligence API which, similarly, we have pre-built functions for. The hardest part was simply getting it to look halfway decent :-)
Very interesting theory. I will have to do some comparisons in our own field to see how we compare with our top competitors.
Attention SEOmoz... Maybe a new tool :)
Great post and awesome tools. While these methods can show link manipulation, it still can't show Google who's doing the link manipulation. Meaning it's impossible to take action on a site for using "paid links" if a) Google can't prove that money exchanged hands or b) Google can't prove that the website in question built the links themselves. Just a friendly reminder that Google is not going to penalize you for doing manipulative linking. They will and should, however, discount or ignore these types of links.
Also, I find it funny that you say "Certain types of link manipulation like regular press releases, article syndication, or blog reviews tend to build links two and three at a time, all pointing to the same domain." and this post has three links to your site in it :)
Ironic, eh. I think that it is worth pointing out that Google needn't prove a link is paid before choosing to devalue it. IMHO, Google has numerous metrics determining the trustworthiness of a link (or an entire link profile). Google might not penalize you for easily detected paid links, but they can certainly find ways to ignore them.
Unbelievable research. Great analysis and great tools. Thanks!
I'm glad this got upgraded to the main blog. I read it yesterday and sent it around to all our SEO guys at work to use. Really good tools and analysis.
Perfect, man. Thanks for the share. Would your firm/company be okay with sharing the set of Wikipedia URLs you used to create the baseline? Are they related to one niche, do they cover all niches, or are they random?
A little embarrassed by this, but we actually md5'd the URLs after they were randomly chosen. The method we used to choose them was the Special:Random link off the home page of Wikipedia, so they are as random as that feature is.
Gotcha. Thanks for replying and thanks for your amazing tool. It is kind of slow, but totally worth the wait.
I think that your comment about analysing your competitors first is better than assuming you should be close to Wikipedia. As a guy who builds websites for clients, for example, a link to our website is nearly always on the home page (in the footer, to not interrupt user experience of course) - and this is something that is likely to be the same for many other web designers. So I would compare mine to my competitors every time.
Thanks for these tools though, really useful and thanks for the time put into the research!
In a word - awesome. This is the kind of analytics and insight that I wish I had the creative vision, tools and time to produce.
One day maybe, once I've tackled the task of answering all my emails :)
Great post. I'd just like to point out that unfortunately Google isn't quite as advanced as you guys.
[Two attached images: link-spam.png]
Above are two images from one of my competitors. The URL in question has dominated every local search for all major cities on a national level for 8 months.
I believe SEOs often overestimate Google's capability. I also believe that a large chunk of the posts on SEOmoz are about two years ahead of where Google is at.
I would take that chart with a grain of salt - if you private message me the domains, I can take another look. I am suspicious because both of those are 100% spikes, which means that unless, under some super-strange set of circumstances, all of their links are exactly the same on these metrics, the tool for some reason only analyzed one backlink of your competitor's.
Great post, Russ. This is another case where an algorithm's intelligence is a bit inferior to a homo sapiens'. Some sites put links together at the bottom of the page because for THEIR site it creates a better user experience. They may do so as small lists that differ from page to page, but not in the body text. It's another case of DON'T do what's best for your users, do what's best for Google. "Best for your users" is a silly idea when the users are people and Google isn't one of the users visiting the site.
Hi Russ, do these 3 Virante tools check one's entire site, or just the entered URL? I'm asking because a couple days ago we had a URL redesign on our site, and I get very different reports if I run the URL www.domain.com vs. www.domain.com/index.html.
I'm assuming the latter URL will simply pull that given URL's stats, since Linkscape treats it as not a top-level page, and the data won't be for the entire site.
If you could add functionality to display the total # of pages crawled for the given query, it would help to diagnose these types of problems.
Or you could just let us know :)
Thanks,
M
We use the SEOmoz Site Intelligence API with the Links method and the page_to_page Scope to get the top 1000 links pointing to that page. We then spider all of those pages to determine the various metrics (link proximity, multiple links to the same domain from a page, etc...)
Thanks for the answer Russ!
I'm not at all familiar with the API, but you're saying that the Virante tools compare the link data from only ONE of our pages to the average calculated from 500 random Wikipedia pages, correct?
The best comparison would obviously be to compare the average of 500 random pages of MY site to the average of 500 pages from Wikipedia, but that would require us to re-invent your method and apply it to our own sites, assuming of course that you provide us with all the datasets for Wikipedia to keep the comparisons the same between all users.
Right now I'm only using the tools to spot-check the most important pages of my site against Wikipedia, but it'd be awesome to compare my entire site against the Wikipedia data. I bet if I had that full data it would completely change things.
Thanks for making these tools available. Great insight here!
Big kudos for offering these tools for free - what an excellent idea. I totally agree that using Wikipedia may not be 100% relevant, but the concept is sound. You guys deserve to be highly successful putting tools like this out there... SEOmoz take note: buy these guys!!
Did you hear that Rand? Just making sure.
Having trouble with the depth one. The others work well. Thanks.
This is very useful information. These are the first free and publicly available tools that I have seen that attempt to identify link manipulation. I look forward to playing with it on some of my known link-buying competitors and see how it plays out!
Question: Was there much variance in the link graphs of the Wikipedia articles?
I think this is a valuable tool with the ability to give you an idea that perhaps something is wrong.
However, after running a few sites through the source link depth analysis tool, I see one small problem. Because it is using Linkscape data, there were multiple cases where I saw a site had a low depth link because a page was crawled when a blog post was running on the front page of a blog and was indexed. At this point, the link is no longer on the front page.
Everything requires further investigation, but I think this is a great addition to the SEO community and the fact that you are sharing these tools for free is a great thing! Thanks!
Hmmm. Looking more closely at an analysis of a site I know does essentially all paid links, I really think some of the metrics you are gathering here are not quite getting the job done with people who are used to manipulating the SERPs with skill.
The site I'm looking at in particular shows approximately 70% of the linking pages only having one link from a particular page.
The link proximity analysis also shows approximately 70% of the links ranking at a '1' on the scale. I think this is because they are using private blog networks for their rankings.
Regardless, I think it just shows that these are tools, and can't be used as the end all in deciding whether or not a site is actually spamming and purchasing links. Further scrutiny is always required.
I agree that Wikipedia seems like an ideal choice for such a case study. But just wondering, did you happen to look at any other "ideal" sites to see if their link profiles aligned with Wikipedia's?
I'm floored by how helpful this is! Real data, real tools, real action items. Thanks so much for the detailed blog post!
We have not looked at this for other "ideal" sites; however, we do regularly employ a similar technique where we take the aggregate of the sites already ranking for a keyword (ie: the sites you will be competing against) and then compare your profile on these metrics to theirs. This is particularly valuable for determining what threshold exists for various link building tactics in a particular keyword space.
Wow... just when I thought I couldn't process another metric... link proximity comes into play. Thanks for the tool.
I think if you are getting links from a variety of sources, and you are varying your backlinks, keeping your patterns random... you'll be alright.
Plus... being honest, and not trying to over-stuff or buy backlinks, keeps you looking healthy.
Just a thought.
You are right for the most part, but chances are your efforts to naturally get links may still create a different pattern from truly-natural links.
In my experience, you're right, Russ. Any attempt to improve links for the purpose of improving links is, by definition, "unnatural". That said, I try to focus on the activities that produce editorial links, and on links for qualified traffic.
Great tools Russ, Thank you!
Hey Russ, you have a decent tool for link analysis. Really great job. However, I've tested a few domains that are banned by Google for link spamming, and they have almost the same graph as Wikipedia.org :)
These are by no means perfect - each one indicates a separate measurement we can use to identify link graph manipulation. In particular, they do NOT look at most spam link building techniques (ie: forum spam, comment spam, etc) which are actually far easier to detect using basic footprints. And, yes, we have those too :-)
Tools seem to be broken at present. Every site entered comes up with flat-line graph, and no Wikipedia benchmark.
I am looking into this, but at present I have not seen any other reports of the tools being down. I am double checking myself.
Word of warning: it is likely these will go down at some point today due to the server being hammered. Each request gets thousands of links from SEOmoz's API and then spiders all of those pages to analyze their metrics. If we get 1,000 requests today, it means we will spider somewhere around 1,000,000 unique pages.
Great stuff Russ! We're having problems with the Depth Tool, looks like it works really quickly but no data loads, and the Multiple Links Tool will only give us the Flat Line Graph as well. I'm guessing it's as you say that everything is getting pummeled right now but wanted to let you know that it's not just thekohser that's having issues. Cheers and great work!! Much appreciated.
weird, can you PM me the URL you are testing? Perhaps something screwed up then got cached. I could clear the cache for you at that point.
I noticed that an incorrect/invalid URL entered into the tools creates a graph even if the URL does not exist. Be careful what you type!
Russ- This is a fantastic post. I have all 3 tools running reports for me right now. I can only imagine Virante is getting swamped with traffic. Well deserved traffic for creating such interesting tools. I anxiously await the results and look forward to running numerous sites through them.
The traffic actually hasn't been as bad as we expected (or maybe we are just better prepared than in the past). Certainly the volume is there, but the servers seem to be handling it quite well.
I think your model is OK.
Here in Poland we have a little "problem" with Wikipedians - they remove all external links, even if the link is valuable (because SEO = SPAM ;/). I call them "the dog in the manger". I've described this problem on my blog - https://blog.shpyo.net/256/wikipedysci-to-psy-ogrodnika.html (sorry, it's in Polish).
I don't know what this "issue" looks like in other languages, but here in PL, if you add a link, it will be deleted. You are in a better position if you have an official site.
I think this is very valuable in understanding how Google may be measuring the value of links both now and in the future. I also thought it very clever to use Wikipedia as an ideal model of natural link building from which to measure against.
Awesome... I really like your post... MASTER.
This post is a prime example of why Russ and Virante are leaders in the enterprise SEO space. My company has been with them for roughly 3 years and has nothing but great things to say. Data-driven decision makers.
Well written Russ!
Thanks for sharing these great tools.
It is amazing how far the industry has come in the last 14 years.
Great post.
It is really informative!
I've been contributing images to Wikipedia for 5 years now, and I benefit from a lot of links pointing to my sites.
I think it's a clever idea to detect spammy links, and I appreciate your work on it. My idea is that you could also use the link profiles of other trusted sites such as Google and CNN and take the mean of those graphs, instead of only Wikipedia's link profile. It would become more robust, in my opinion.
Hi, This is really great....
But why not use google.com or one of its services as the ideal model? Google claims it is not paying for links...
-Fadi
As a note guys, the tools were down for a few hours this morning. Some genius decided to automate queries against the server. I'm really not opposed to it, but we just weren't prepared. We have since doubled up on server resources so it should be able to handle the onslaught.
Thanks again,
Russ
If you're going to produce great tools, people will want to automate them to bring the data in quicker :-) Great work.
Hey decent idea, might have to test these methods out in a few niches =)
I like the nice link graph tool too...
The only thing that might throw it off is if someone has scraped content from Wikipedia and then used it across a huge network of article sites; the wiki links are then built via, say, 1,000 spam sites...
Looks very interesting, particularly the source-link depth analysis. I would imagine that having a high percentage of homepage links is a dead giveaway of link manipulation.
Really interesting approach - thanks. Even if you don't agree with every metric or Wikipedia as a foundation, this is a great model to build on. I think Google has gotten pretty savvy about link profiling - they can see patterns across the entire link graph in milliseconds.
Thanks. Yeah, I agree completely that chances are Google does NOT use these metrics specifically - these are crude and unsophisticated at best. Probably the best takeaway is the eye-opener of seeing how crude and unsophisticated tools can catch spam pretty easily.
Great post and great algorithm development for great tools.
I never thought about how paid links tend to come from home pages.
There's a lot of great ideas here that I will definitely have to incorporate into my personal link analysis toolset.
Very well done!
Very interesting stuff. Thanks, Russ!
Hey guys, just woke up here on the east coast. I'll do my best to answer questions here throughout the day or via email. If you are a pro member you can always private message me if the question is about a specific site. Thanks for all the feedback!
Wow, this looks great - thanks for opening up that tool for us to check out!
No problem, I hope that you enjoy it!
You're a genius Russ!
This is a truly amazing read. The concept of benchmarking versus Wikipedia is inspired!
:-)
awesome tools mate, I've bookmarked all of them!
sorry mistake
You are just amazing, Russ.
I have tried to implement this model for my site about civil engineering, but it will take a lot of time to rank in the Google SERPs.
Could anybody share an example of your work?
[link removed]
I used this tool, but I have some confusion; that is why I am sending you an email. Please check.
It's a glittering idea. I still haven't used it, but now I am going to.
This is a really great Wikipedia model for link building. I am going to integrate this model into my ongoing link building campaign.