Intro from Rebecca: Eric Enge is a guest blogger for SEOmoz and has contributed various posts on link building. Today he'll be tackling the Google Ajax API. Enjoy!
At Stone Temple Consulting, we've spent some time playing with the Google Ajax Search API and the Google Ajax Feed API. These are great tools for embedding dynamic content in your web site.
This post is going to talk about how to use these two APIs to do just that.
Ajax Feed API
This is an API from Google that lets you process feeds (such as RSS and Atom), extract content from them, and embed it on your site. You can read the Developer Documentation, or use their Dynamic Feed Control Wizard to generate the code for you.
To give you an idea of how this works, let's create a sample which shows you the last 8 posts from the SEOmoz blog. Here is what it looks like:
This is a cool thing to do on a web site that has a blog alongside a lot of other content. You can cross-promote your blog on the non-blog pages of your site, or even pull in posts from a series of third-party blogs on related topics as a way of creating your own best-of-breed lists of posts.
The great thing is that this feed is totally dynamic. To confirm that, check out how it changes as new posts go up on the SEOmoz site. Now, let's take a look at the code required. First, you need to place this Javascript somewhere on the page where you want this to display:
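The script itself isn't reproduced here, but based on Google's developer documentation it looks roughly like the sketch below. The feed URL, the element ID, and the key placeholder are my assumptions; the google.feeds calls follow the documented API and only run in a browser page that has loaded the jsapi loader script with your own key:

```javascript
// Browser wiring (sketch -- runs only in a page that includes
// <script src="https://www.google.com/jsapi?key=YOUR-KEY"></script>):
//
//   google.load("feeds", "1");
//   google.setOnLoadCallback(function () {
//     var feed = new google.feeds.Feed("https://feeds.feedburner.com/seomoz"); // assumed URL
//     feed.setNumEntries(8);
//     feed.load(function (result) {
//       if (!result.error) {
//         var container = document.getElementById("feedseo");
//         container.innerHTML = renderEntries(result.feed.entries);
//       }
//     });
//   });

// Pure helper that builds the <p>/<a href> markup described below from the
// normalized entries the API hands back. This part runs anywhere.
function renderEntries(entries) {
  return entries.map(function (entry) {
    return '<p><a href="' + entry.link + '">' + entry.title + '</a></p>';
  }).join("\n");
}
```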
For those of you (such as me) who are not Javascript-savvy, we can still pick out a few things from the source code. The first is that you need an API key. If you look at the first line of the script, which starts with src="https://www.google.com/jsapi?key=AB ...", the value after "key=" is the API key specific to SEOmoz.
You need this key so that Google can verify that the code is running on the correct domain. Each domain needs its own key, and the code needs to be running from the domain for this to work.
Then, if you look carefully at the code, you will see variables defined, such as "p", "a", and "href". What happens is that the code dynamically inserts a series of <p> and <a href> elements into the page, so the formatting is really quite simple.
The other thing to notice is that the "var container" statement near the top refers to an element ID (in this example, "feedseo"). This is the key to where the output of the Javascript gets placed on the page. To complete the implementation, put the following code in your source at the place where you want the output to show up:
<div id="feedseo">
</div>
With this, you should be off to the races. As I mentioned previously, a more sophisticated programmer can use this API to extract interesting posts from a number of feeds. Of course, a programmer at that level could go straight to the feeds themselves to extract the data, but the Ajax Feed API abstracts that work away.
In other words, there is no need to worry about the type of feed (RSS, Atom, ...) or the particular version of the feed standard. Certain elements are also handled automatically for you. This certainly makes it easier for hackers like John Biundo (my partner) and me to create some basic implementations.
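To make that normalization concrete: whatever the source format, each entry the API returns exposes the same fields (title, link, publishedDate, contentSnippet, and so on), so a hypothetical helper like this one doesn't care whether the feed was RSS 2.0 or Atom:

```javascript
// Build a one-line summary from a normalized feed entry. The field names
// (title, publishedDate) match the Ajax Feed API's JSON result format.
function entrySummary(entry) {
  return entry.title + " (" + entry.publishedDate + ")";
}
```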
Ajax Search API
Now let's look at the Ajax Search API. With the Ajax Search API, you can tap into a variety of Google search properties. These include:
- Web Search
- News Search
- Video Search
- Blog Search
- Local Search
- Google Maps
- Google Custom Search Engines
In addition to tapping into these search properties, you can dynamically show the results on your web site (in other words, the user does not get sent to Google to get the results). This provides a really nice way to integrate search functionality more fully into your own site.
Let's look at an example of a canned video search I created to display some of the popular Blendtec "Will It Blend?" videos:
The great thing about this is that it appears (and will play) inline, directly within your own page. To create your own "video bar," just go to the Ajax Search API Wizards page on Google. Note that this version performs video searches on YouTube and presents only those results. So, let's take another look and see if we can create a version that will pick up the Whiteboard Friday videos from SEOmoz:
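The wizard generates the markup for you, but under the hood a site-restricted video search looks roughly like this sketch. The element ID and query terms are my assumptions; the google.search calls follow the documented API and, like the feed example, only run in a browser page that has loaded the jsapi loader:

```javascript
// Browser wiring (sketch):
//
//   google.load("search", "1");
//   google.setOnLoadCallback(function () {
//     var control = new google.search.SearchControl();
//     control.addSearcher(new google.search.VideoSearch());
//     control.draw(document.getElementById("videobar")); // assumed element ID
//     control.execute(siteQuery("seomoz.org", "Whiteboard Friday"));
//   });

// Pure helper: build a query limited to one site, the same trick the
// wizard's site-restriction option uses via the site: operator.
function siteQuery(site, terms) {
  return terms + " site:" + site;
}
```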
Now you have a way to extract videos from the site of your choice. Google provides wizards for quickly creating these types of results for videos, maps, news, blogs, and books.
Summary
There are a lot of reasons why you might want to play with this stuff, and they all come back to the desire to create a compelling experience for people on your site. SEOmoz could, for example, use the above Whiteboard Friday script to promote those videos elsewhere on its site.
hi, over the weekend i played with these thingies, too. the result was
https://www.creativspace.at/
basically the google image search made cool. like Searchme, but with google-powered results (images only, though). the image tags and caption content are rendered server-side so there's something in there when the crawlers come.
nonetheless: every SEO should dig into the documentation, there are some hints in there to understand google just a little bit better.
Found this php script long ago (I wish I could remember who wrote it so I could give them credit) that does pretty much the same thing. I commented out the part where it prints a description, so you can add that back if you want. I also added a loop so that it only prints out 5 entries. You can change that number, of course.
<?php
// Parser state shared between the expat callbacks below.
$insideitem = false; $tag = ""; $title = ""; $description = ""; $link = ""; $count = 0;

function startElement($parser, $name, $attrs) {
    global $insideitem, $tag;
    if ($insideitem) { $tag = $name; }
    elseif ($name == "ITEM") { $insideitem = true; }
}

function endElement($parser, $name) {
    global $insideitem, $tag, $title, $description, $link, $count;
    if ($name == "ITEM") {
        if ($count < 5) {
            $count++;
            printf("<dt><b><a href='%s'>%s</a></b></dt>", trim($link), htmlspecialchars(trim($title)));
            // printf("<dd>%s</dd>", htmlspecialchars(trim($description)));
        }
        // Reset state for the next item even after the 5-entry cap is hit.
        $title = ""; $description = ""; $link = ""; $insideitem = false;
    }
}

function characterData($parser, $data) {
    global $insideitem, $tag, $title, $description, $link;
    if (!$insideitem) return;
    switch ($tag) {
        case "TITLE":       $title .= $data; break;
        case "DESCRIPTION": $description .= $data; break;
        case "LINK":        $link .= $data; break;
    }
}

$xml_parser = xml_parser_create();
xml_set_element_handler($xml_parser, "startElement", "endElement");
xml_set_character_data_handler($xml_parser, "characterData");
$fp = fopen("https://www.americaninbound.com/blog/wp-rss.php", "r") or die("Error reading RSS data.");
while ($data = fread($fp, 4096)) {
    xml_parse($xml_parser, $data, feof($fp)) or die(sprintf("XML error: %s at line %d",
        xml_error_string(xml_get_error_code($xml_parser)),
        xml_get_current_line_number($xml_parser)));
}
fclose($fp);
xml_parser_free($xml_parser);
?>
And I thought html looked ugly!
Well, from a quick look it's not useful for the search engines to index, as Lynx Viewer can't find it, but the usefulness to a user outweighs that :)
Thanks for bringing this to light
Edit: One cool thing I have found with this is that doing a blog search for my web name (dudibob) shows all the places I have recently commented. With this and Twitter you can publicly stalk whoever you want! </tinhat moment>
I love it. This is a great way to add latest and fresh content to your site for the users.
Question - given that it's behind Javascript, I don't think it will help from a search engine perspective. Is there a workaround for that?
If there is one, and now that Google is promoting duplicate content on its own, how will Google differentiate this from other pieces of original content on the site?
I was surprised to find this post at SEOmoz. I agree with Rajat: these are cool tools, but if you want to capitalize on the content you're pulling in, you should use server-side scripting that the crawlers will index. A follow-up to this post might be, "Why you should pull content in server-side rather than leveraging javascript APIs".