Browsing Digg this evening, I came across this entry, which, since being posted 10 hours ago, has received 473 diggs and is on the front page.

What is it? Why, it's a brilliant article from Direct Magazine - BigDaddy Means Big Changes at Google. Note the high-quality, well-researched content Mr. Brian Quinton has put together for us:

The new BigDaddy data center contains new code for examining and sorting the Web, and once it has been tested fully, will become the default source for Web results, according to Yahoo!’s chief search engineer Matt Cutts. In a January 6 post on his blog, Cutts said that might happen in early February or March of this year.

Let's count the errors in just this single paragraph, shall we?

  1. Matt Cutts doesn't work for Yahoo!
  2. There's no link from the article to the blog entry (perhaps so users don't find out that he doesn't actually work for Yahoo!)
  3. Mr. Cutts' entry was on January 4... perhaps 21-day-old news is simply too old, so they swapped in a fresher date. And hey... wasn't it mentioned much earlier than that... way back on Nov. 30, 2005, I believe.
  4. "contains new code for examining and sorting the Web" - only somewhat inaccurate. I believe he wanted to say "contains a new algorithm for ranking." Anything beyond that would be stretching Matt's words.

Sadly, that's not nearly all.

While search optimizers often know where to find a Google testing data center and have usually tried to go there to see how the pages they’re working on are being searched and indexed, those IP addresses change often, even in a day.
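For anyone wondering how the datacenter-watching game actually works: as I understand it, the individual test data centers sit at reasonably stable IPs, and what shuffles around is the DNS rotation in front of the public hostname, i.e. which data center happens to answer you. A quick illustrative sketch in Python (the hostname is real; what DNS hands back is whatever it feels like at that moment):

    import socket

    # Resolve the public hostname a few times and see which data center
    # IPs the DNS round-robin hands back. There is no fixed list here;
    # it's simply whatever DNS returns each time you ask.
    for attempt in range(3):
        hostname, aliases, ips = socket.gethostbyname_ex("www.google.com")
        print(attempt, ips)

But back to the article: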

Some of these changes will bring Google’s indexing technology up to par with its competitors; for example, Yahoo! and MSN have been handling 302 redirects for a year or more, although perhaps not as effectively as BigDaddy will eventually do.
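For the non-SEOs in the room: "handling" a 302 means honoring it as a temporary redirect - keep the original URL in the index and fetch the content from wherever the Location header points - as opposed to a 301, where the target URL should replace the original. A minimal sketch of a crawler making that distinction (Python; the host and path are made up):

    import http.client

    # Fetch without auto-following redirects so the status code stays visible.
    conn = http.client.HTTPConnection("www.example.com")
    conn.request("GET", "/old-page")
    resp = conn.getresponse()

    if resp.status == 302:
        # Temporary: keep /old-page in the index, fetch content from the target.
        print("302; content temporarily at", resp.getheader("Location"))
    elif resp.status == 301:
        # Permanent: index the Location target instead of /old-page.
        print("301; index", resp.getheader("Location"))
    else:
        print(resp.status, "- no redirect here")

And one more: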

The new search bot is more flexible, seems faster and can read non-text content more readily; that should mean that in time, it will be able to read links within images and even within Flash video, matter that gets ignored by bots that can’t speak Javascript.
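As for bots "speaking Javascript": the reason script-built links get ignored is mundane. A simple bot scans raw HTML for href attributes, so a link that exists only inside an onclick handler or a Flash movie never shows up in that scan. A toy illustration (Python, made-up markup - this is not how Googlebot works, just the general idea):

    import re

    # Toy page: three ways to point at the same place. Only the first is
    # a plain HTML link; the other two rely on script or Flash.
    page = '''
    <a href="/seahawks">Seattle Seahawks</a>
    <span onclick="window.location='/seahawks'">Seahawks (script link)</span>
    <object data="nav.swf"></object>
    '''

    # Naive link extraction of the kind a non-Javascript bot performs:
    # scan for href attributes and nothing else.
    print(re.findall(r'href="([^"]+)"', page))   # ['/seahawks']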

Why can't articles about SEO in non-SEO publications ever be accurate? It makes me wonder if half the things I read in the news have any bearing on reality. I'd almost be more forgiving if the author didn't already have experience reporting on SEO. Do experts in every area of technology and the web have to deal with this from media sources? Can you tell this is an issue that gets under my skin?

But let's end on a positive note. Direct did manage to interview Danny Sullivan to get some feedback about this, and he was terrific. Look how nicely he mentioned my boys in blue:

“If you want to go to the Seattle Seahawks page on the NFL Web site, you’ll get this long, horrendous URL,” Sullivan says. “But the site also has another URL that’s just ‘Seattle Seahawks.’”

Thanks, Danny - the Hawks need all the mentions they can get.