Howdy, Mozzers. This is Russ Jones (@rjonesx) from Virante, Inc. I recently spoke at the Search Exchange conference in Charlotte, NC on the topic of programmatic, automated SEO solutions and realized that it could probably be more valuable in front of a larger audience. Of course, the attendees have a head start, so you better get to work.
I have a confession to make. I love infomercials. In fact, I would probably call myself an infomercial elitist / hipster. I liked infomercials before they were cool; before Billy Mays and the Slap Chop Guy made their way into internet memes. I pledge my allegiance to the godfather of infomercials, Ron Popeil, while guys like Anthony Sullivan weep at his altar, asking forgiveness for their sub-par jobs as pitchmen. OK, maybe I take it a little too seriously - I do happen to have a DVR full of Gator Grip, Ginsu Knives, and Flowbees - but I believe there is something extremely motivating about this type of advertising. And Ron Popeil hit it on the head over and over again: Set It and Forget It.
This was the tag line for the Ronco Showtime Rotisserie, an amazing success for infomercials. You see, there is an innate desire for us to find solutions to common, everyday problems that do not require our attention. These nagging, annoying problems like making dinner, cleaning up, and in our industry - SEO tedium - tend to suck up our time and attention while bringing only marginal improvements.
Unfortunately, there is this perception, almost a bias, against automation in our space: a misbelief that there is nothing we can set and forget in SEO. Well, I am here today to free you from the reins of some of your daily SEO miseries, all for the incredible price of free.
Strategy 1: Real Time Referrer Indexing
We often joke that "Google knows everything." While we can lament the loss of privacy and liberty, there is one thing that I do want Google to know about - my links. I want them to know about as many links pointing to my site as possible. Unfortunately, Google misses out on a good portion of the web. Well, what if you could find links that Google hasn't necessarily found, and then make sure that Google does index them and count them? Introducing Real Time Referrer Indexing:
If you were to go into your Google Analytics right now and export all of the pages that have sent visitors to your site since your website's inception, what percentage of them do you think would have been indexed by Google? 90%, 95%, 99%? Sure, it will probably vary from site to site, especially given how many different sites out there have sent traffic to you, but there are likely to be a handful that Google never got around to crawling. Our goal with this first set-it-and-forget-it tactic is to find the pages that refer traffic to your site on-the-fly and make sure that, if they have a link, Google knows about it.
Ideally, our automated solution would work like this...
- The script would record every referrer from other sites.
- The script would spider that site to see if it actually has a real, followed link.
- The script would check to see if Google had cached that referring page with the followed link.
- The script would coax Google to reindex that page if it had not yet found the link.
- The script would continue to check to see if Google had cached the referring page.
This is actually quite easy to accomplish programmatically. The first three steps are done every day by tools regularly used by SEOs. The only difficult part is finding a way to encourage Google to visit the referring pages it has not yet indexed. We can solve this by simply having a widget on the page that displays those referrers - essentially an "As Seen On" bulleted list of pages that have linked to your site but have not yet been indexed.
Well, I have a treat for those of you who are, or know someone with, halfway decent programming skills. Here is sample code that does just this on your typical LAMP (Linux, Apache, MySQL, PHP) installation. A word of warning: it is highly likely that this code is buggy. Make sure that you check it and make modifications before running it in production. All you need to do is install the script on any pages of your site for which you would like to perform real time referrer indexing.
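To make the moving parts concrete, here is a minimal sketch of the approach. It is not the exact download; the table name, the cache-check logic, and the isGoodRef() implementation shown here are simplified placeholders, so treat it as a starting point rather than production code.

```php
<?php
// Minimal sketch of real time referrer indexing (placeholders, not the full script).
// Assumes a MySQL table: referrers(url VARCHAR(255) PRIMARY KEY, cached TINYINT).
$db = new PDO('mysql:host=localhost;dbname=seo', 'user', 'pass');

$referrer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
$host     = parse_url($referrer, PHP_URL_HOST);

// Steps 1 & 2: record external referrers that really carry a followed link back to us.
if ($host && $host !== $_SERVER['HTTP_HOST'] && isGoodRef($referrer)) {
    $db->prepare('INSERT IGNORE INTO referrers (url, cached) VALUES (?, 0)')
       ->execute(array($referrer));
}

// Steps 3 & 5: re-check uncached referrers against Google's cache.
// In practice you would run this from cron and rate-limit it, not on every page view.
foreach ($db->query('SELECT url FROM referrers WHERE cached = 0 LIMIT 5') as $row) {
    $cacheUrl = 'https://webcache.googleusercontent.com/search?q=cache:' . urlencode($row['url']);
    $headers  = @get_headers($cacheUrl);
    if ($headers && strpos($headers[0], '200') !== false) {
        $db->prepare('UPDATE referrers SET cached = 1 WHERE url = ?')->execute(array($row['url']));
    }
}

// Step 4: the "As Seen On" widget - temporarily link to referrers Google hasn't cached yet.
echo '<ul class="as-seen-on">';
foreach ($db->query('SELECT url FROM referrers WHERE cached = 0 LIMIT 10') as $row) {
    echo '<li><a href="' . htmlspecialchars($row['url']) . '">'
       . htmlspecialchars(parse_url($row['url'], PHP_URL_HOST)) . '</a></li>';
}
echo '</ul>';

// Crude check that the referring page actually contains a followed link back to this host.
function isGoodRef($url)
{
    $html = @file_get_contents($url);
    if ($html === false) {
        return false;
    }
    $me = preg_quote($_SERVER['HTTP_HOST'], '#');
    return preg_match('#<a[^>]+href=["\']https?://(www\.)?' . $me . '[^"\']*["\'](?![^>]*nofollow)#i', $html) === 1;
}
```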
This is exactly the type of set-it-and-forget-it SEO that I love. Simple techniques, simple solutions, long-term results.
So let's move on to another set-it-and-forget-it technique.
Strategy 2: On-the-Fly PageRank Recovery
Alright, so if you haven't heard of PageRank Recovery before, you are going to need a quick little lesson. Whenever someone links to your site but screws up the URL, the PageRank that flows through that link essentially evaporates. I am pretty sure that it ends up in Matt Cutts' personal PageRank stash, which he has learned to convert into a powerful foodstuff that he consumes prior to mountain climbing and running marathons. But I digress: if you can find where those broken links point to on your site, then 301 those URLs to a real page, you can "recover" that PageRank. Virante created a tool to do just that based on SEOmoz's Site Intelligence API, which Rand highlighted a little while ago, but it still requires that you spend time running the tool regularly. I want to be lazy and have my site recover PageRank for me while I watch The Facts of Life dressed in a Snuggie and downing 5-Hour Energy shots. So here is how it would work:
Ideally, our program would do the following...
- The script sits in your CMS right before a 404 is fired. If you don't have a CMS, you would direct your .htaccess file to pass all 404 traffic through it first.
- The script captures the URL that the visitor or GoogleBot tried to visit.
- The script somehow magically knows what URL you MEANT to visit.
- The script 301 redirects you there.
What's that you say? "But Russ, our programmers don't know magic. They are all muggles. And even if they did know magic, I can't find a USB powered wand anywhere these days." Well, I am bringing you good news from some friends: Mr. XML Sitemap and Ms. Levenshtein.
If you have been paying attention to the countless blog posts in the SEO world, you should already have an XML sitemap that keeps a record of all the URLs on your site. This is a good start to the magic that is On-The-Fly PageRank Recovery, because now we know all the possible URLs your visitor or GoogleBot may have been trying to reach. Now, we simply have to find the most similar URL to the one the visitor came to. How do we accomplish this? Levenshtein Distance.
Levenshtein Distance, also known as the Edit Distance, is a measurement of the minimum number of changes necessary to convert one piece of text into another by adding a letter, removing a letter, or substituting a letter. For example, the Levenshtein Distance between the words "Rock" and "Russ" is 3, because we will have to substitute the O, C, and K with U, S, and S. Below is an example of how Levenshtein Distance could be used to find two similar URLs:
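(PHP happens to ship with a built-in levenshtein() function, so the comparison itself is a one-liner. The two URL paths below are made up purely for illustration.)

```php
<?php
// "Rock" -> "Russ" takes 3 substitutions.
echo levenshtein('Rock', 'Russ') . "\n";                    // 3

// A mistyped path versus the real one from the sitemap - only one edit apart,
// so it is almost certainly the page the visitor meant.
echo levenshtein('/producs/widget', '/products/widget');    // 1
```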
So, the way On-the-Fly PageRank Recovery works is by reading all the URLs in your sitemap and then comparing the Edit Distance between those URLs and the URL your visitor entered. If the server finds a close match, we then 301 redirect rather than show a 404 error. Subsequently, when a Googlebot tries to visit those previously 404 pages, it will instead find that 301 redirect and appropriately pass the PageRank through to the intended page. Plus, On-the-Fly PageRank Recovery is a huge usability win for visitors who now don't have to try and search your site to find the correct page.
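A stripped-down sketch of that 404 handler might look like the following. It assumes your sitemap is a plain sitemap.xml sitting next to the script, and the edit-distance cutoff of 3 is an illustrative judgment call, not a number from the actual plugin.

```php
<?php
// Sketch of an on-the-fly PageRank recovery handler - hook it in before your normal 404 fires.
$requested = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

// Crude but namespace-agnostic way to pull every <loc> out of the XML sitemap.
$xml = @file_get_contents(__DIR__ . '/sitemap.xml');
preg_match_all('#<loc>\s*(.*?)\s*</loc>#', $xml, $matches);

$bestUrl  = null;
$bestDist = PHP_INT_MAX;
foreach ($matches[1] as $loc) {
    // levenshtein() caps out at 255 characters per string - fine for typical URL paths.
    $distance = levenshtein($requested, parse_url($loc, PHP_URL_PATH));
    if ($distance < $bestDist) {
        $bestDist = $distance;
        $bestUrl  = $loc;
    }
}

// Only redirect when the closest URL is plausibly what was meant; otherwise fall
// through to the regular 404 so you keep your error reporting intact.
if ($bestUrl !== null && $bestDist <= 3) {
    header('HTTP/1.1 301 Moved Permanently');
    header('Location: ' . $bestUrl);
    exit;
}
header('HTTP/1.1 404 Not Found');
```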
Want to give it a test drive? Try any one of these broken links back to Virante and my blog, TheGoogleCache:
- Virante's Tool Page: https://www.virante.com/se9-toolz
- Second Page Poaching: notice the dollar sign in the url
Now, it would be hypocritical of me to talk about setting it and forgetting it, and then make you go out and do all the work yourself to get it up and running. So, in the spirit of laziness, I have included a couple of options for you to use as well. Of course, double-check everything before you go into production with any code you get off the internet, regardless of whether or not it comes from a trusted site like SEOmoz.
Final Thoughts
There are incredible opportunities in the world of Search Engine Optimization that we have only begun to address. So much more can be done in terms of describing, detecting, and repairing SEO issues, all in a programmatic, automated fashion. These are just two examples. Good luck, and keep inventing!
The Real Time Referrer is an interesting concept, but it seems as though it could be easily scammed, which could hurt your rankings if someone makes your site link to an adult site.
For instance, it's trivial to spoof the referrer with a simple curl command:
curl -e https://some-adult-site.com https://your-site.com
With this script installed on your site, anyone could run the above command to cause your site to automatically post a link to anything they want.
If spam protection is taken into consideration and is added to the isGoodRef function, then this could be fixed. Without spam protection, you'd be leaving the door open to spammers.
The script actually checks to make sure that the link is actually on the page before it is included. Unless they determine your server IP and write a cloaking script to deliver the link on the fly to bypass it, you are safe. It is possible, but more difficult than what you describe.
Thanks for the heads up though.
Yes, you're right -- and it would depend on how Google indexes and caches spam/adult sites. I'm not an expert in this area, but I'm guessing that Google doesn't cache every page it crawls, especially if it determines that the page has spammy or malicious content on it. If that's true, then a spammer could create a link on his site pointing to your site, then click on the link to cause your page to automatically link to the spam site. If the page isn't cached in Google (because it's too spammy), then your script would let it in.
Like I said, it's an interesting concept for helping Google discover links that will increase the rankings for your own site. I don't mean to downplay the idea -- I just wanted to point out the implications that this may have so people understand what's possible and what they'll want to protect against. Hope it helps someone.
Thanks!
No, I'm glad you pointed it out because it is potentially exploitable with that cloaking method. We could add some code, though, that checks if Google has cached the page after we know the link was added, and if the link is still not there in the cache, we know they are cloaking, and could pull the link. Not too hard to program.
Client: "but our web development vendor said that SEO was built right into the CMS!"
SEO Analyst: {shaking head}
People always tend to forget that SEO is a competition. You have to do more and be better. NOTHING is ever complete.
Coming from a guy who likes to build links and thinks the on-site part is not nearly as important, I really loved this post. Like, I loved it A LOT.
By far one of the best youmoz posts I've ever come across.
Russ, here's an idea. Actually make those ideas into actual scripts, tell people about it, and then become a viral sensation! They're awesome ideas, so might as well run with it while you can so no one steals em!
I like your hypnosis.
Happy that I didn't miss that post and that it went straight to the main blog! Good job!
Thanks for giving some great tips on automating some processes and reminding us that we need to take care of our content (and the incoming links) once they've been published. I feel super inspired to play around with these on my own sites. Also, thanks for the humor you've brought to the SEO technical world. :D
Thanks, this is a fun industry to be in, might as well make light of it.
Thanks so much! This post opened up a whole new way of thinking for me!
The SEOPaladin ;-)
Nice one.
GREAT post here, Russ! I implemented the Predictive 404 plugin on my site yesterday and I must say...it's magic.
I did have issues getting it to activate, so I had to add the line "Text Domain: magic-404-fixer-upper" (that's the official name of the plugin now) under the URI line in the header section of the plugin in order for it to work. Beautiful job, man.
Creative post, and nice use of linking out..
The Levenshtein Distance is great! I'm going to investigate more about this
Thanks a lot for your great post!
"I am pretty sure that it ends up in Matt Cutt's personal PageRank stash" > That made my day!
Oh, and line 73 of the first strategy requires another squiggly bracket ;)
Me too. And for some reason, all of a sudden I really want a Snuggie....
Seriously a great post.
Thanks!
Just WOW! Thanks, Ryan
Yeah, I remember when I first installed the 404 tool on my blog and it worked. I nearly pooped myself.
Ok, maybe nearly is the wrong word. Actually pooped myself.
Hey Russ,
Gonna be honest with you- I have a massive boner for this article. Stellar work my man.
Looks like I am going to need to dust off the old PHP manual to get this into my skull. I promised myself to start coding last year at the Distilled event in London you spoke at, but have yet to move my ass. On the other hand, I did start trying to experiment with Google boundaries by trying to tank my own websites and do some testing, which has served me well. Going to install the plugin tonight and mess about with it.
Brilliant stuff, Russ. Thanks for providing the source code for these tools as well. Already implemented the WordPress plugin for my dreadfully empty blog at https://thesearchguy.com/blog/.
One thing I noticed: I couldn't get the plugin URI to load, FWIW. I thought it was simply due to a spelling error (you have https://www.thegooglecache.com/preditive-404 instead of https://www.thegooglecache.com/predictive-404) - but neither works. Not sure how much that matters, but thought I'd mention it.
The Real Time Referrer is great concept as well. Too often, I think we focus so much on building links that ideas like this get pushed down the priority list. What's the point in creating great content, getting attribution for that content, only to have Google not recognize the attribution, right?
Thanks again!
Jason
Funny that an SEO didn't get the attribution link right. No biggie, it only shows up behind the admin panel anyway.
Why not just +1 the linking page instead?
Google should see the link with no problem.
That is an interesting idea, but it is not "set it and forget it" :-). Imagine +1 every unindexed referrer on a 100K page website... It would definitely make sense for a smaller site, though.
Awesome Tool and great post!
Thanks! Always fun to release stuff that gets used.
If you don't want a reciprocal link from your main site to a site linking to it, even temporarily, because of fears over reciprocal links at least partially devaluing each other, then why not put a new script on a second, less important site that is indexed regularly by Google? When your first site receives referral traffic, your second site would be alerted of the source, and would then post a temporary link to the referring site.
Definitely possible, but you have to find an excuse to add those links, otherwise it might look like a list of paid links :-)
You could label them as "friends" ;)
Twitter is also a great way to get pages indexed, so long as you use it enough. Might be a good way to get those pages with links indexed (not automated but a good solution anyhow)
I find this article interesting because I have created an automated service utilizing the Google Analytics API. The service is called InboundLinkAlerts (see it at www.embeddedanalytics.com). The service is similar to the five-step process outlined above. First we download every referrer to your site. Then, on user-defined intervals, we query links to your site for the last 24 hours. If we find a new link, we then spider the page to see if the link actually exists. If it does, we send you an email.
You might find this service useful if you want to keep tabs on new Inbound Links.
Nice Mark,
The logical next step would be to add some backlinks to the backlink (fast indexer type) then ping all of the links.
Dreaming...
Great article, Russ - will be testing and playing with these. Keep 'em coming.
I just might. This is only half of the presentation I gave at Search Exchange in Charlotte. Aspiring SEOs might want to search for that online and read the rest.
In strategy 1, this creates a reciprocal link which, last time I tested, numbs the weight of the backlink. Am I missing something?
Would it still numb it if Google has not indexed the other site?
After Google does index the link, it wouldn't credit it as you hoped it would. In my experience, any type of reciprocal link is devalued because it looks like a partnership (I'll link to you if you link to me), so Google "numbs" these types of links.
I have had just the same experience as daniel.
I think that's why the writing in the blue bubble mentions "temporarily linking..." It will no longer be reciprocal if you eventually remove your link. It's there simply to get your backlink recognized in the first place.
I also don't think there's any penalty involved with doing that so long as you're not removing hoards of links all at once. I could be wrong there. Any input?
Hopefully whatever type of profiling Google uses to determine what is reciprocal does not include the full link history of the page.
I like the idea, I'm just skeptical of linking out =)
I think a lot of people are, so you're not alone. I don't think it's anything to be afraid of though, so long as it's done moderately (and not all reciprocal). At the very least it can be a signal of participation and providing good relevance to users. Maybe that's just me being too optimistic :)
The reciprocal link only remains posted until the page is actually cached. Once it is cached, the widget pulls it out. This way you get it indexed but it only remains reciprocal between the time Google indexes your referring page and the system discovers Google has found it.
But it remains in your link history permanently. I don't think there is a solid concept of temporary with link profiles.
Perhaps you could host the widget on a completely different domain from the one being linked to. That way it doesn't look like a reciprocal link?
I actually have found some value, at times - even recently - in linking reciprocally if it is topically or geo-relevant. IMO, the bigger issue here is referrer spammers who are going to take advantage of sites that use this script to help get their own pages indexed faster and to get a temporary link / boost on long-tail queries.
What if I wrote a counter-script that looked for the script above on all sites throughout the web, and then sent a referral from one of my pages, perhaps an auto-generated one, so that I show up in your "as seen on" list all the time? You could block those referrers, but I think they're going to add up over time, especially if a bunch of people are using the same script that blackhats could grab hold of as a footprint.
The 404 idea happens all the time accidentally when users are 301'd to a soft 404 page. I guess if you went about it in a more intelligent way, as you have outlined, it would work better. But you'd still want to capture the event, including the referring URL and/or query, lest you lose your insight into your 404 errors, which is a very important part of diagnosing bigger issues in your architecture.
Great post though and I would have loved to have been at the presentation. Next time!
Hey Russ, this is one of the most advanced and quality hands-on posts I have read for a long time! Thanks for sharing this!
Strategy 1 is old-fashioned backlink indexing (which most people do), but strategy 2 is definitely interesting. I disagree with those who say that PageRank is dead. Google's underlying algo is still based on links, even if the Panda overlay is all about on-site stuff.
Great and practical post. Thanks!
Like a true Irishman would say: BLOODY EXCELLENT!
Best Regards,
Richard A.
Programmatically checking cache dates sounds like a violation of Google's guidelines (no automated queries). Also, I can't think of a reason to repeatedly access a cached page directly (rather than via a SERP) - do you think that could raise a flag?
Might be things to consider when evaluating / implementing this, unless I'm looking at it the wrong way.
Also wanted to say it was great to meet and chat with you after your last session at PubCon, Russ. The two presentations I saw of yours were some of the most noteworthy of the conference.
Russ and mozzers. There is such a thing called "set and forget in SEO"!
We built a product 10 years ago that is used by top-tier and small customers alike. We call it D2S. It goes the extra mile beyond SEOmoz (or WebCEO) on-page suggestions and creates thousands of well-optimized pages across a website. Current clients' CMSs include Enfinity, WordPress, Drupal, Joomla, xCart, and in-house systems.
D2S is the new layer of the Internet that takes care of on-page SEO automatically: HTML -> CMS -> D2S
It uses the Levenshtein formula (and other formulas) to a much greater extent than the way it is described in your post.
It comes with a built-in site search that is integrated with the SEO process. Yes, it will optimize for the internal search terms people are using on your site and boost the long-tail factor. E.g., D2S will learn that the "burgundy" color search phrase relates to your brown (or red) products. It will automatically add "burgundy" to the brown products' meta tags, and/or to the footer of the brown products (depending on the configuration; it can also search for "burgundy" in your RSS/social feeds and push the associated blog/social piece into the footer).
In October 2010 we introduced a social module as well. To see the system menu and read the press release, go to
https://www.prweb.com/releases/Search3w/D2S/prweb4549194.htm
Awesome post. Very insightful and inspiring.
Anyone interested in learning more about programmatic search, check out the blog I created for my digital marketing class: https://thebestdigitalmarketingcourses2016.wordpress.com/2016/02/09/programmatic/
Trying to install the PHP script for PR recovery. But how? Copy the PHP code, paste it into a Word file, name it xxxx.php, and place it in the root dir? Confused
Word files tend to muck things up. First thing I'd try is plain notepad. Other than that, you'll need to wait for help from the OP if that doesn't work.
I tested your PageRank recovery tool for my website https://tamilanads.com.
It worked well. I will test all your tools and update via this comment.
I'm new to SEO and blogging. Your website has helped a lot.
thank you :)
Nice Russ. Got me thinking about what else is "set and forget." A lot of people forget how much value they have with author trust. It took me a long time to realize that a site I've been the admin contact for is adding a huge amount of trust to me as the publisher/author/verified G owner of my website.
I bought, built, and have been the admin for a site that is a four-letter dot-org: registered in about 1999, never had ads or sponsors, had strong links from CNN.com back in the early 2000s. It was basically set-and-forget when it was built as an education resource and a homepage for a small foundation. In fact, it looks horrible now. It still uses frames. It may be one of the most retro-outdated sites I know. I've maintained it for 12 years for the client with about an hour a month on average.
The cost of the site was basically the last 12 years of hosting plus the original design and dev I did for the owners with Adobe PageMill (remember that tool, anyone?).
The WHOIS data is tied to my Goog acct (which also has a lot of trust, and was opened early on)
For my site, I could have the publisher be a brand, or use the brand of ME in the WHOIS and contact page. Google gives ME tons of love. The author trust means I ranked for words with very few strong links when the site was only 4 months old and my other accounts (e.g., LinkedIn) were not totally "roped in" to my Google acct.
So, don't forget how much authority is out there with, for example, old WHOIS data. You've been one of the contacts in WHOIS for 12 years and the site has huge trust rank? You set it: now don't forget it.
You are delivering valuable information for others, Jones!! Thanks!! Keep it up......
Hey Russ, Great Post!
Really Informative!
I just bookmarked this link.
Thank you for sharing it's really helpful for me.
Russ, thanks for sharing your creative ideas and solutions to a couple SEO challenges which affect everyone. The ability to automate these tasks adds a tremendous amount of value. I find your article truly inspiring as it makes me think about what other tasks can be automated.
A great piece Of Work... nice post
After playing with the Real Time Referrer thing, I came across several annoyances that make it harder, or could even prevent it from working automatically:
- First thing to note, the data I pull from Google Analytics is not exactly the same as what is in the Google cache/index or on the real referring host: the protocol (http://, https://, etc.) and subdomain (www.) are lost in the GA API, so you have to check whether your link exists AND whether your site is cached with several combinations (example.com, www.example.com, http://example.com, https://www.example.com).
- Second, some sites using PHPSESSID or other variable session IDs might not be cached under the same name. Thus Google will keep telling you the page is not cached (while it IS, but under another URL). This will result in your script showing this page for indexing FOREVER. Maybe a timeout or a counter on the number of attempts should limit the printing of these URLs...
- Some pages are deliberately NOT cached/indexed by Google as per robots.txt rules. Here I see two problems that may arise: 1/ like in the previous point, they WON'T be cached in the end, so you'll end up with an infinite display of this URL; 2/ these pages may have been disallowed from indexing/caching for PRIVACY purposes, so that linking to them would create a serious security matter.
I'm not saying that your tip is useless - on the contrary. What I think, though, is that it should be handled manually and controlled carefully. I cannot just "set it and forget it".
One use would be to list all the "missing indexes" once in a while and post them to Google+1 after checking that it is allowed/necessary.
Great post. Having just started looking at SEO this year there is clearly lots to learn
I am Italian. I am an engineer in Italy and I deal with planning, structural calculation, and housebuilding. My website is: https://www.calcolostrutture.net .
I wanted to compliment you on the good quality of the forum.
I wanted to ask the forum whether there is a way of reactivating AdSense once it has been disabled... Can I activate AdSense on a different site than the one for which Google banned me for invalid click activity? Or will AdSense realize it as soon as I enter the same beneficiary data and disable the account all the same?
Thanks
Sorry, but it will be really hard to reactivate AdSense now, for 2 reasons:
1. AdSense will look at the registrant information for patterns.
2. If you vary that and are able to get up a new account, they will see that you are putting ads on the same sites as the previously banned registrant.
In the future, though, you should ask your questions in SEOMoz's excellent QA section - https://www.seomoz.org/qa
Quick! Someone make the 404 thing into an "official" Wordpress plugin that I can install. :)
Nice info about "PageRank Recovery". I understood almost everything, with some questions (because I am not an advanced SEO... I think). Is "On-the-Fly PageRank Recovery" similar to "Second Page Poaching"? Many of the steps are similar, except here we can make it automated.
Q. How can the Levenshtein distance concept be used practically (if we don't use any script)? Do we need to make the possible URL combinations manually by replacing letters, such as "S" replaced by "$", "O" replaced by "0", etc.? The problem is that my website is made in ASP.NET and I can't use a PHP script, WordPress plugin, or Drupal, so I need some other solution.. :(
A request: it would be very helpful for me, as well as for others (maybe), if I could find a basic overview of how your PageRank Recovery tool works. Does it try to find the broken links present on our site? (I tried it for my site but it found zero, saying "Good Job".) I'm confused about it: does it find broken links (redirecting to 404) present on our site?
Please solve my doubts.. thanks.. Lots of thumbs up for you!
The WordPress plugin just gives me a blank page when I try a wrong URL. There's no blank row at the bottom or end of my file.
It may conflict with an existing plugin. As I mentioned, none of the code has been widely tested, so you may have a unique experience.
Great post Russ, thanks very much. I really like the out of the box thinking you've got going. I'm in the same camp as you and think a lot of an SEO's tasks could be automated with a little thinking and you've set us on the right path and given me some ideas already!
Thanks,
Ken
Thanks for the vote of confidence!
Wow, you've got some really solid and well-thought-out strategies here. I'm definitely going to have to look at that WP plugin. Thanks for your work and the links to your tools!
Thanks. I hope people make the most of it - there is a real opportunity here to solve both an SEO and a common Usability issue.
Oh my. I just activated this plugin on my site.
As soon as I did, I threw my hands into the air and exclaimed "BLACK MAGIC!"
It's true, you can ask my wife.
Also, is there a reason you haven't submitted the WordPress php script as an actual plugin? It was pretty easy to stick in there myself but just wondering why it wasn't a plugin in the first place.
Hey Russ, nice article. These are great forward thinking ideas and good use of current resources.
Thanks!
For those of us who are not PHP guns, what do we do with the generic PHP script?
Do we add it to the 404 file? Does it sit somewhere else and need to be called?
Thanks
Wow, there's really a fantastic amount of information in there - great point re the 404s.
"I don't know what the hell you just said little kid. But you reached out and touched a brothers heart." ~ Tracy Morgan
Great info - sending the CTO over to make sure we can set it and forget it too!
An interesting post Russ, can see a lot of hard work went into it.
Levenshtein Distance is an interesting concept, can see it working well for large sites where it's not realistically possible to know which pages/URLs are similar across the site.
Good work :-)
We handle this a little differently by analyzing the URLs ahead of time and building some metrics that are easier for a database to look up... for example, imagine indexing the URLs as normal, but also having a separate table with the URL ID, the string length of the URL, and then a metric that converts the URL into a number by adding the relevant char codes together. You can then limit your LD tests to URLs that are within 3 characters in length and within X in combined value. This is a lot less DB- and server-intensive, as you whittle down the total number of URLs tested.
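In rough PHP, that pre-filter looks something like the sketch below - the table, column names, and window values are just placeholders to show the shape of the idea, not our actual implementation.

```php
<?php
// Pre-filter candidates by length and "char sum" before running levenshtein().
// Assumes a table: urls(id INT, url VARCHAR(255), len INT, charsum INT) populated at index time.
$db = new PDO('mysql:host=localhost;dbname=seo', 'user', 'pass');

// Sum of the character codes in a URL - a cheap similarity proxy the DB can index.
function charSum($s) {
    $sum = 0;
    for ($i = 0, $n = strlen($s); $i < $n; $i++) {
        $sum += ord($s[$i]);
    }
    return $sum;
}

$requested = '/producs/widget';            // the 404'd path
$len       = strlen($requested);
$sum       = charSum($requested);

// Only pull rows already close in length and combined char value (windows are illustrative).
$stmt = $db->prepare('SELECT url FROM urls WHERE len BETWEEN ? AND ? AND charsum BETWEEN ? AND ?');
$stmt->execute(array($len - 3, $len + 3, $sum - 300, $sum + 300));

// Run the expensive edit-distance test only on the survivors.
$bestUrl  = null;
$bestDist = PHP_INT_MAX;
foreach ($stmt->fetchAll(PDO::FETCH_COLUMN) as $candidate) {
    $d = levenshtein($requested, $candidate);
    if ($d < $bestDist) {
        $bestDist = $d;
        $bestUrl  = $candidate;
    }
}
```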
The transition to plug-in configuration gives the message "Do you have sufficient privileges to access this page." What does it mean? Thank you.
Great post. For WordPress users, there are some great plugins out there to take care of strategy #2 automatically :)
It's called SEO Quake. Easy to install and easy to use.
Wow. You're a smart guy. Impressive!
I say that to myself every morning when I wake up. It's a little pathetic.
First with the PageRank recovery tool. WHY would you want to bother?
PR is dead as a SERPs influence. Links to influence "rankings or PageRank" are black hat.
And how many would you get?
As for the SEO tool, I do not see the need at all. The author states "there are likely to be a handful that Google never got around to crawling". I checked 50 of the sites linking to me and all were indexed.
If a site has ONE link on another crawled site, you can bet it has been indexed. I have had sites indexed even before I had them linked. Now that linking for SEO has been cast as a villain, the great majority of influential factors have migrated to on-page work. This means that once you build the page, it gets indexed at the level the on-page work determines. If you do not change the content, if it is not "freshness"-based content, if there are no competitors that out-SEO you, and if Google does not "update" you out of your position, the page will remain at its indexed position. Set it and forget it.
Does the software access Google's db? It sounds like it does. If it does, this is against their TOS.
best,
Reg https://nbs-seo.com
Amazing, Russ...