Unfortunately, up until I started working for SEOmoz, I was accustomed to surfing into a site and taking it at face value. I now have to train my eyes to see things as an SEO expert would. How do I train myself to do that? I'm currently taking the hands-on, shadowing approach with Rand, but I recently read an interview with Michael Gray (aka Graywolf) on the Sootledir Marketing Blog that offered another site analysis tactic:
When I was first starting out I'd do it with index cards, writing down all of the on-site factors for site1.com on one index card. Then I would do the same for site2.com and site3.com. Do the same for links. Then take all the link cards and spread them out on the table looking for commonalities in linking patterns.

In terms of visual, hands-on exercises, his advice is pretty sound. It's a good way to get accustomed to comparing and contrasting different sites and to train yourself to catch site weaknesses.
Do any of you have additional advice to offer an untrained eye? What's an innovative, clever, or simply tried-and-true way to train yourself to know what to look for when analyzing a site (other than the typical get-better-the-more-you-do-it approach)?
Thanks for the advice, everyone. I know that my skills will really only improve with practice, experience, and time, but it's good to know what I should be looking for.
Good questions, Rebecca. I get asked those things a lot, and I wish I could give a more complete answer, but honestly, at this point, after doing it for so long, I can surf a site for about 20 minutes and have the whole list in my head of which SEO factors are good/bad/existent/nonexistent. Some of the highlights that should start jumping off the page for you in short order are:
- image links vs. text links
- text of links
- do your own crawl of the site and compare your numbers to Google, Yahoo, and MSN
- is the site fully indexed? (site:domain.com on any engine)
- snapshot of title tag differentiation
- strength of brand determines whether the brand name should be at the start or end of the title
- Flash?
- links, of course
- view source: are CSS and JS inline or external?
That's simply the first 10 minutes of any analysis, but it certainly paints an immediate, broad picture before you start spending real time analyzing backlinks, etc.
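For readers who want to script a first pass along these lines, here is a minimal sketch (not Rand's actual process, just one possible way to automate a few of the on-page signals listed above). The URL is a placeholder, and it assumes the third-party requests and beautifulsoup4 packages are installed.

```python
# A minimal first-pass on-page check: fetch one page and tally a few
# of the signals listed above (title tag, image vs. text links,
# inline CSS/JS, presence of Flash). URL is a placeholder.
import requests
from bs4 import BeautifulSoup

url = "https://example.com/"  # placeholder: the site under review
html = requests.get(url, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

links = soup.find_all("a", href=True)
image_links = [a for a in links if a.find("img") and not a.get_text(strip=True)]
text_links = [a for a in links if a.get_text(strip=True)]

print("Title tag:        ", (soup.title.string or "").strip() if soup.title else "MISSING")
print("Total links:      ", len(links))
print("Image-only links: ", len(image_links))
print("Text links:       ", len(text_links))
print("Inline <script>:  ", len([s for s in soup.find_all("script") if not s.get("src")]))
print("Inline <style>:   ", len(soup.find_all("style")))
print("Uses Flash:       ", bool(soup.find_all("embed") or soup.find_all("object")))
```

Run against a handful of competing sites, the output gives a quick side-by-side comparison in the spirit of the index-card exercise above.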
Excellent advice, everyone.
Don't forget to ask yourself, "Would my friends, family members, and I use this site again?" Remember that in the end, the site is for human use. A spider may like the site, but that is worth nothing if human users choose another site for their needs. Human experience is also a crucial part of SEO.
On the tool front, the Web Developer extension for Firefox is excellent!
It makes it so easy to see everything that's going on on-page. It's brilliant seeing hidden links in all their glory!
Right, Thomas. Mozilla also has excellent, easy-to-use SEO extensions where Alexa and PageRank show in the bottom bar of the browser. The SEO tool can be switched on and off, and it has great options that appear when you search for website rankings or whatever you need...
Another thing: I feel I need to read up on XHTML/CSS standards instead of always sitting here like an amateur... any suggestions for books in English, preferably written in 2006 or later? haha
Michael
Michael, Cameron Moll wrote an excellent guide to CSS a few days ago. It's got a list of recommended books on it. It's also got a lot of starting points for finding more information.
Step 1: make sure the site is Googlebot friendly:
- Multiple query strings, session IDs, &id= params
- Do domain.com and www.domain.com both return status 200?
- Does site:domain.com return a bunch of supplementals with similar title/description snippets?
- How many legit sites are returned for a linkdomain: query?
- Do internal pages link to /index.html instead of domain.com/?
- Would Google pick up the right description snippet even if you removed META description tags?
- Are your meta description tags shorter than 50 chars?
- Um... does your site validate?
- How much content is there on each page? (an article vs. product listings)
- Is your content easy to find when I scan the source code? Or will I have to dig through OPTION, JS, CSS, TR and all sorts of other stuff to find your opening paragraph, if any?
- If I run a "snippet" search, will I see more than one listing?
- Do your meta description tags read like they were machine generated?
- How huge is your meta keywords tag?
- Who do you link out to, and why?
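A couple of the items on this checklist are easy to script. The sketch below is one possible version, with example.com as a placeholder domain and the requests and beautifulsoup4 packages assumed installed: it checks whether the bare and www hostnames both answer 200 (ideally one should redirect to the other) and measures the length of the home page's meta description.

```python
# Sketch of two checks from the list above: bare vs. www hostname
# status codes, and meta description length on the home page.
import requests
from bs4 import BeautifulSoup

domain = "example.com"  # placeholder domain

# If both hosts return 200 with no redirect, the site may be serving
# the same content under two URLs.
for host in (f"https://{domain}/", f"https://www.{domain}/"):
    resp = requests.get(host, timeout=10, allow_redirects=False)
    print(host, "->", resp.status_code, resp.headers.get("Location", ""))

# Length of the meta description tag on the home page.
soup = BeautifulSoup(requests.get(f"https://{domain}/", timeout=10).text, "html.parser")
desc = soup.find("meta", attrs={"name": "description"})
if desc and desc.get("content"):
    print("Meta description length:", len(desc["content"]))
else:
    print("No meta description tag found")
```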
A simpler variant of EGOL's suggestion of using Excel is to use Notepad. I typically work on a 1024-pixel-wide screen. I reduce the website under study to an 800-pixel-wide window and put that on the left. I open a Notepad window in the space on the right.
I jot down all my thoughts just as they come to mind. Then, when I've finished my exploration, I cut and paste to group the thoughts more logically. Usually I've then got the text ready to hand that I can pass on to whoever is interested. Notepad, or some variant of it (I actually use Metapad), is the software that gets the most use on my machine. That's partly because my handwriting is unreadable. :(
I start with the obvious things like titles and descriptions, and then see if I can establish who the site is aimed at, which should be clear if the site is done really well. Then I start looking at the page content and design to see if it appears to be appropriate and keyword-rich without being repetitive.
I also look for good use of headlines and whether or not semantic markup is used.
I also look at internal body copy linking and home page content, and how it leads visitors (and spiders) through the site. I really dislike it when people settle for bold and italics. If something is that important, you should be linking people to more information.
I believe in optimizing for people first, since they are the ones you are really trying to reach.
My best advice is to study reverse engineering, and find out the methods that suit you best.
And scorecards (Excel, web applications, whatever suits you) are the way to go for analysis. They give you a uniform way to measure one site's efficiency against another's (and build an extensive database for further use, like reporting... or web domination... evil laughter... ;)
index cards remind me of undergrad
For fun, you can practice self-visualization. Visualize yourself as the bot spidering the web and coming to site A, and then site B. What do you see and index on site A, as compared to site B? What don't you see because it's Flash, JS, or simply omitted...
honestly, a hot cup of French-roast coffee in the morning helps me.
That's good advice. (The self-visualization, not the coffee. I don't drink coffee.) WWABS? What Would a Bot See?
Ian McAnerin's SEO Browser is pretty useful for WWABS investigations - https://www.seo-browser.com
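For a rough do-it-yourself version of the same idea, here is a toy sketch (an approximation of the concept, not of how seo-browser.com actually works; the URL is a placeholder and requests and beautifulsoup4 are assumed installed) that strips scripts and styles and prints only the text and anchor text a non-JavaScript crawler would see.

```python
# Toy "what would a bot see" view: drop scripts, styles, and markup,
# then print the title, visible text, and link anchors.
import requests
from bs4 import BeautifulSoup

url = "https://example.com/"  # placeholder URL
soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

# A basic crawler doesn't execute JavaScript or render CSS, so remove them.
for tag in soup(["script", "style", "noscript"]):
    tag.decompose()

print("TITLE:", soup.title.get_text(strip=True) if soup.title else "(none)")
print("TEXT :", " ".join(soup.get_text(" ", strip=True).split())[:500], "...")
print("LINKS:")
for a in soup.find_all("a", href=True):
    print("  ", a.get_text(strip=True) or "(image/empty anchor)", "->", a["href"])
```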
Rick, I prefer Italian cappuccino...
It's really the best, and I fill up constantly all day with espresso haha
Some really good pointers.
Michael's list is very insightful. The only thing I miss is an observation that says:
If ..., then the site is optimized for September 2006 :)
There's not much I can add, well, one thing maybe.
Do they use Flash, or rather, is the whole site Flash? Then it will probably take a whole lot of work, starting with convincing the client that non-Flash websites are still good.
[Typically a Chinese website has to be built in Flash and depending on the boss the "skip" button will or will not be included.]
What I do if I have to optimize a Flash website (and we do have a couple in our company) is create a bunch of HTML landing pages, using different keywords for different landing pages.
There's actually a variety of new options for Flash, beyond landing pages. My Flash designer is so determined to convert me to his corner that he's doing tons of research into ways to make Flash more search friendly.
A lot of the new options are related to Macromedia's (now Adobe's) attempts to make Flash more accessible. Accessibility and search-friendliness serve each other quite nicely.
Flash does make for great video embedding (e.g., YouTube) and for excellent online presentations. They may not be search friendly per se, but they're great linkbait; multimedia content (when done right) always has been.
Like rickm, I usually try to visualize myself as a bot crawling the website in order to evaluate it. As this is not always easy, I generally perform a crawl (using a tool originally intended to make Google sitemaps) and examine the results. I can then spot many different issues, such as:
- similar title and meta tags between pages
- maximum size of pages
- pages with similar content and different URLs
Last Tuesday, I crawled a badly designed 500+ page website built with Joomla! I had to stop the crawl as I kept discovering new pages: I had 32,000 of them, with the same pages appearing 80+ times. This was not a session ID issue. This kind of website certainly poses the same problem to MSN and Yahoo, as they have indexed very few pages of it.
I find that using a crawl simulation can help me spot a number of problems, especially on dynamic websites based on CMS or e-commerce software.
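A crawl like the one described could be approximated with a short script. The sketch below (start_url is a placeholder; requests and beautifulsoup4 are assumed installed) walks internal links breadth-first up to a page limit and groups URLs by their title tag, so duplicate titles (a common symptom of the many-URLs-one-page problem described above) stand out.

```python
# Minimal breadth-first crawl that groups URLs by <title> to surface
# duplicate or thin pages. start_url and max_pages are placeholders.
from collections import defaultdict, deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

start_url = "https://example.com/"  # placeholder start page
max_pages = 200                     # safety limit for the sketch

site = urlparse(start_url).netloc
seen, queue = {start_url}, deque([start_url])
titles = defaultdict(list)          # title text -> URLs carrying it

while queue and len(seen) <= max_pages:
    url = queue.popleft()
    try:
        soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    except requests.RequestException:
        continue
    title = soup.title.get_text(strip=True) if soup.title else "(no title)"
    titles[title].append(url)
    # Follow only internal links, dropping fragments.
    for a in soup.find_all("a", href=True):
        link = urljoin(url, a["href"]).split("#")[0]
        if urlparse(link).netloc == site and link not in seen:
            seen.add(link)
            queue.append(link)

# Any title shared by several URLs deserves a closer look.
for title, urls in sorted(titles.items(), key=lambda kv: -len(kv[1])):
    if len(urls) > 1:
        print(f"{len(urls):>3} pages share title: {title!r}")
```

On a site like the Joomla one described above, a crawl like this would quickly show the same titles repeating across dozens of URLs.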
Excel instead of the cards.