Browsing Digg this evening, I came across this entry, which, since being posted 10 hours ago, has received 473 diggs and is on the front page.
What is it? Why, it's a brilliant article from Direct Magazine - BigDaddy Means Big Changes at Google. Note the high-quality, well-researched content Mr. Brian Quinton has put together for us:
The new BigDaddy data center contains new code for examining and sorting the Web, and once it has been tested fully, will become the default source for Web results, according to Yahoo!’s chief search engineer Matt Cutts. In a January 6 post on his blog, Cutts said that might happen in early February or March of this year.
Let's count the errors in just this single paragraph, shall we?
- Matt Cutts doesn't work for Yahoo!
- There's no link from the article to the blog entry (perhaps so readers don't find out that he doesn't actually work for Yahoo!)
- Mr. Cutts' entry was on January 4... perhaps 21-day-old news is simply too old, so they swapped dates. And hey... wasn't it mentioned much earlier than that, way back on November 30, 2005, I believe?
- "contains new code for examining and sorting the Web" - only somewhat inaccurate. I believe he meant to say "contains a new algorithm for ranking." Anything beyond that would be stretching Matt's words.
Sadly, that's not nearly all.
While search optimizers often know where to find a Google testing data center and have usually tried to go there to see how the pages they’re working on are being searched and indexed, those IP addresses change often, even in a day.
Some of these changes will bring Google’s indexing technology up to par with its competitors; for example, Yahoo! and MSN have been handling 302 redirects for a year or more, although perhaps not as effectively as BigDaddy will eventually do.
The new search bot is more flexible, seems faster and can read non-text content more readily; that should mean that in time, it will be able to read links within images and even within Flash video, matter that gets ignored by bots that can’t speak Javascript.
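An aside for anyone who hasn't followed the 302 saga that second excerpt alludes to: a 301 redirect tells a crawler a page has moved permanently, while a 302 says the move is only temporary, and engines that trusted 302s too readily were at the heart of the well-known redirect-hijacking complaints. Here's a minimal sketch of the distinction - just Python's standard library and a made-up URL, nothing to do with how Googlebot actually works - showing how a crawler might stop and inspect a redirect instead of blindly following it:

    import urllib.error
    import urllib.request

    class InspectRedirects(urllib.request.HTTPRedirectHandler):
        """Refuse to follow redirects so we can examine them ourselves."""
        def redirect_request(self, req, fp, code, msg, headers, newurl):
            # Returning None makes urllib raise HTTPError instead of following.
            return None

    opener = urllib.request.build_opener(InspectRedirects)
    try:
        response = opener.open("http://example.com/some-page")  # hypothetical URL
        print("No redirect; status", response.status)
    except urllib.error.HTTPError as err:
        if err.code == 301:
            # Permanent move: a crawler should credit the target URL.
            print("301 ->", err.headers.get("Location"))
        elif err.code == 302:
            # Temporary move: the original URL should stay in the index.
            print("302 ->", err.headers.get("Location"))
        else:
            raise

How a given engine weighs those two cases is exactly the kind of detail the article waves at without explaining.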
Why can't articles about SEO in non-SEO publications ever be accurate? It makes me wonder if half the things I read in the news have any bearing on reality. I'd almost be more forgiving if the author didn't already have experience in the field of reporting on SEO. Do experts in every area of technology and the web have to deal with this from media sources? Can you tell this is an issue that gets under my skin?
But, let's end on a positive note. Direct did manage to interview Danny Sullivan to get some feedback about this, and he was terrific. Look how nicely he mentioned my boys in blue:
“If you want to go to the Seattle Seahawks page on the NFL Web site, you’ll get this long, horrendous URL,” Sullivan says. “But the site also has another URL that’s just ‘Seattle Seahawks.’”
Thanks, Danny - the Hawks need all the mentions they can get.
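For the curious, one common way a site pulls that off is a plain permanent redirect from the short, memorable path to the long canonical one. A rough sketch of that pattern - with made-up paths and Python's standard library, not the NFL site's actual setup:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical mapping of vanity paths to long canonical URLs.
    VANITY = {
        "/seattleseahawks": "/teams/page/team-detail?id=SEA&session=abc123",
    }

    class VanityRedirect(BaseHTTPRequestHandler):
        def do_GET(self):
            target = VANITY.get(self.path.rstrip("/").lower())
            if target:
                self.send_response(301)  # permanent, so engines credit one URL
                self.send_header("Location", target)
                self.end_headers()
            else:
                self.send_response(404)
                self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), VanityRedirect).serve_forever()

The 301 is the important design choice: it tells crawlers the short path is just an alias, so the long URL keeps all the credit.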
Just to add a little hype to this subject... is it me, or does BigDaddy feel like it's going live? We have noticed a big jump in our rankings via the Google API, and I am wondering if this has to do with BigDaddy.
"Why can't articles about SEO in non-SEO publications ever be accurate?" ... umm. In short, lack of research skills, lack of research, incompetent (or no) editors.
"It makes me wonder if half the things I read in the news have any bearing on reality." Yes. It's difficult to read the mainstream media without wondering how some of the reporters manage to cross the street without a white cane and a guide dog. See above re incompetence.
I think the guy gave pretty accurate information. His only mistake was the Yahoo! employee thing about Matt Cutts (I am positive this was a misspelling or a slip, because he has obviously written about this in the past and knows the industry fairly well), and that he kind of puts SEM-specific terms in his own words rather than technical ones.
Cristian Mezei
I think a lot of folks would disagree, Cristian - look at the initial post; it points out more than a half dozen inaccuracies and plenty of poorly researched material besides. It's not just the single error.
Jasongolod is right on target.
This is the problem with half of the articles that appear on the web. They are written by people other than experts, and for reasons other than accurate reporting of fact.
I was hoping that the diggs were all from people like us commenting on the untruths, but obviously not. This comment scared me:
""The new BigDaddy data center contains new code for examining and sorting the Web, and once it has been tested fully, will become the default source for Web results, according to Yahoo!’s chief search engineer Matt Cutts."
Yahoo? Typo anyone?"
No, Yahoo! is supposed to have an exclamation point after it everywhere.
"I'm likin' the new algorithm." - posted by PathDaemon at 10:12 PM, 1/25/06
Sounds like it's time to form our own digg network and push real content up on Digg.
DUUUUUUUUHHHHHRRRRRRRRR.
Not only does he lack fact-checking skills, he also lacks mastery of the English language. What a pitiful article.
With this poor journalistic sampling in hand, I feel like *I* should take SEO to the media. After all, that's what I studied in college. But I'm afraid that'd be a conflict of interest.
ay yay yay.
>>bots that can’t speak Javascript
Talking bots :)
Yeah, I think I'm gonna get into journalism too!
I don't think the poor reporting is in and of itself the sad part. The fact that it got 473 diggs is the sad part.
The sad thing is, I think this happens all day long in all types of "reporting." We are all more sensitive to this particular case because we talk about this stuff all day. But, I see it just about every night when I watch the "news" on television. Sad part is, very few people actually know the difference.
It's when you see that kind of idiocy that you realize that what you read in the news is not always true. Damn, I think someone should write to the editorial directors and let them know that this guy is misleading his readers. What do you think, Rand? Maybe you could replace him?