By now, most of us have gotten around to doing testing of some sort on our websites, but testing specifically for SEO can be extremely difficult and requires extra vigilance. In today's Whiteboard Friday, Rand explains three major things we need to think about when performing these tests, and offers up several ideas for experiments we all can run!
For reference, here's a still of this week's whiteboard!
Video transcription
Howdy Moz fans, and welcome to another edition of Whiteboard Friday. Today we're going to talk about running some SEO tests. So it is the case that many of us in the SEO world, and in the web marketing world overall, love to run tests on our websites. Of course, there's great software, like Unbounce or Optimizely, if you want to run conversion-style tests, tests that kind of determine whether users perform better through conversion Funnel X or Funnel Y, or if Title X or Title Y convinces more people to buy. But SEO tests are particularly insidiously challenging because there are so many components and variables that can go into a test.
So let's say, for example, that I've got this recipes website, and I have my page on spaghetti carbonara, which is one of my favorite spaghetti dishes. Geraldine, my wife, makes a phenomenal carbonara as she grew up Italian.
So here we go. We've got our ingredients list. There's a photo. There are the steps. What if I'm thinking to myself: Gosh, you know, people on this webpage might want to check out other pasta recipes from here. I wonder if, by linking to other pasta recipes, I can get more people exposed to that and get them visiting that, but not just that. I wonder if I can send some extra link juice and link value, link equity over to these other pasta recipe pages that might not be as popular as my spaghetti carbonara page. Can I help them rank better by linking to them from this module here? Should I be putting this on lots of my pages?
So we might think: Well, I could easily put this on here and figure out user metrics. But how do I determine whether it's good or bad for SEO? That is a really, really challenging problem. It is.
So I wanted to take some time and not say this is a foolproof methodology, but rather here are some big things to think about that those of us who have done this a lot in the SEO world, run a lot of experiments, have seen challenges around and have found some solutions. This isn't going to be perfect. It's not a checklist to tell you everything you need to know about, and I'm sure there will be some additional items in the comments. But try these things at the very least.
#1 Experiments need control groups
What's unfortunate about the SEO world is we can't just do something like, "Hey, let's add this to all of our recipe pages and see how it does over the next month." You don't know whether that's going to help. Even if you could prove to yourself that the user and usage metrics look a little bit better, you're still not sure about the ranking impact.
Well, gosh, you roll it out on every page, and you say, "Hey, over the last month things have gotten better. Now we know for sure that adding modules that interlink between our web pages is always better. Let's do that across lots of things." That's not an accurate conclusion to come to, and that's why we need a control group.
It could be the case in the SEO world that maybe these pages are getting more links all of a sudden, maybe your domain authority has risen, or maybe some of your competitors have done some bad stuff and fallen in the rankings. It's just too hard to say. There are just too many inputs going in, and that's why this control group is so essential.
So I might take a group of pages and say those pages get the module, and at the same time these other pages don't get the module. Now if we have something like a rising domain authority or a bunch of competitors falling out, it will be fine, because we'll still see how the group with the module performed against the group without the module. If they both rise in the rankings, we can say reasonably that, "Well, this didn't appear to do enough to change the ranking. So if the user metrics are good, let's keep it, and if the user metrics are bad, let's not keep it, because ranking wise it seems like a wash." But if we observe differences in these two groups, assuming that there are no other differences in those groups, we can be reasonably assured that it was this that helped them rank better.
Now, be careful here. If I were doing this experiment, what I would want to make sure is that I didn't add the module to all of my recipe pages or to some types of recipe pages and not others. I would want to make sure that it is all pasta recipe pages, pages with people, visitors, metrics that are as close as possible to each other so that the control and the test group are as similar as possible.
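As a rough sketch of how you might score such a test (all of the page rankings, group sizes, and thresholds below are invented for illustration), you can compare the average rank change of the module group against the control group, and use a simple permutation test to gauge whether the gap between them could just be chance:

```python
import random
import statistics

def mean_rank_change(pages):
    """Average ranking change; negative means moving up, since rank 1 is best."""
    return statistics.mean(after - before for before, after in pages)

def permutation_test(test, control, trials=10_000, seed=42):
    """Rough p-value: how often does a random re-split of all the pages
    produce a gap in mean rank change at least as large as the observed one?"""
    observed = mean_rank_change(control) - mean_rank_change(test)
    pooled = test + control
    rng = random.Random(seed)
    extreme = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        fake_test = pooled[:len(test)]
        fake_control = pooled[len(test):]
        if mean_rank_change(fake_control) - mean_rank_change(fake_test) >= observed:
            extreme += 1
    return extreme / trials

# (before, after) rankings for each page -- made-up numbers
with_module = [(14, 9), (22, 16), (11, 8), (18, 12), (27, 21)]     # got the module
without_module = [(15, 14), (21, 22), (12, 11), (19, 18), (26, 27)]  # control

print(mean_rank_change(with_module))   # -5.2: the test group moved up on average
print(permutation_test(with_module, without_module))  # small: unlikely to be chance
```

If both groups rise by similar amounts, the p-value stays large and, as described above, the module is a wash ranking-wise; only a gap between the groups counts as evidence.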
By the way, when you're doing this, you also want to find a suitable target. What I mean by suitable target is I really like things for SEO experiments where I'm paying attention to the rankings in particular, I like search results with very low outside activity. Meaning, let's say spaghetti carbonara was one of them, I would watch the search results for spaghetti carbonara for a couple of weeks, and if I saw a lot of movement, my page bouncing around in the rankings, other people's pages bouncing around in the rankings, I wouldn't use it. I'd go to a much less active SERP where churn in the search results and movement in the search result was likely to be very low. That's where I love running experiments.
I'd also look for low competition that tends to go with low churn, and I'd try and find pages where I rank between number 8 and number 30. You might say, "Well, why do you care about ranking in number 8 to 30?" Well, I don't like ranking way at the tail end of the search results because any little thing, if you're ranking page 5 and result number 62 or something, hey man, any little thing could move you up 10 positions, 20 positions. Churn and movement that far back in the search results is much, much higher.
It's also the case that I don't love ranking number 1, 2, 3, 4, or 5, because it can be really hard to move those results. Without a ton of external links with the right anchor text, I'm not going to move one or two positions from number 3 or 4. This is why I like something between 8 and 30. That's what I mean by finding a suitable test result, and your selection may vary on this.
#2 Every test should be repeated multiple times
Every test should be repeatable and repeated multiple times. Preferably, if you can, you actually want to turn the test on and off for the same kinds of pages. This gets tough. Now, the reason you want to do the multiple tests is because you want to be assured, confident that it was what you changed that did it.
So after checking this on pasta recipes and seeing that, hey, my recipe site is getting better, the metrics look good, the rankings are rising a few results each time that I put this on different pages, I feel confident that we can move ahead with this. Great. Now run it on your dinner recipe pages or your risotto recipe pages, something similar to pasta. Run it on your salad recipe pages. If you repeat it on your risotto pages and your salad pages, and you're getting the same results each time, now you can feel pretty confident that it probably was the case that this module was, in fact, the impacting factor that moved the needle. If you just do it once on one set of results, it's much harder to say that with any kind of confidence.
With the turning on and off bit, so what I would want to do, let's say we have my group of pasta recipe pages that get the module. What if I take it off there? Will I see them fall back down in the results? The answer is kind of, well, hopefully I would because then I could be more sure that this was happening. In SEO though, this is really hard. In fact, the search engines make this kind of frustratingly impossible for a lot of link-based stuff, and the reason is something called the ghost effect.
I will probably do a Whiteboard Friday in the future on the ghost link effect. We've been testing this quite a bit with a project that I'm involved in called IMEC Lab, and here at Moz as well. We've seen people over the years report this. Essentially, you point a link to a page, and you see that page go up in the rankings, which makes sense. The link is helping it rank better. You remove the link, and the page takes weeks, sometimes even months, to fall back down. Google knows the link is gone. They've re-indexed the page it came from. Why isn't the page that it helped rank falling right back down?
The answer is this ghost effect.
So ghost effects seem to be a real thing that Google really does around links that used to point somewhere, and they make testing link-based stuff really hard. That's why you want to run the test multiple times. That's why you want the control group and to test it in multiple different sections, as opposed to relying on, "Well, I turned it on and it did this. I turned it off, and it went back to the original, so I know that that happened." The ghost effect will prevent you from observing that.
#3 Rankings have to be part of the test, but they can't be the only part
In fact, I would argue that if they are the only part of your test, you might do some things that could actually mess you up in the long run and mess you up with your users.
So the two other things that I really look at are, number one, how do users perform, the user experience. That can be everything from: are my browse rate, my visits, and my traffic source rates staying relatively similar to the patterns that I've seen in the past? Traffic performance, I want to see that stay relatively stable or improve. If both of those things are improving, hey, maybe you have a real winner on your hands. With a lot of these tests, that could happen. You might see that more people are clicking on those links. Maybe more people are liking different pasta recipes, and linking to those is helping you all across the board. Wonderful, wonderful.
Then I'm looking at rankings as well. Weirdly, even though I'm a hardcore SEO guy and I love SEO, I think rankings are the least important of these three. If I see something perform well for my users and I see my traffic improving, it doesn't matter too much what I see going on with my rankings. In fact, usually it's only when there's not much delta in these two, and rankings performance is the only indicator that things are getting better, that I would care about it that deeply. When you're watching rankings performance, always, of course, watch logged-out, non-personalized, non-geo-biased results. That's relatively easy with something like recipes, but could be very hard with something that's in the local world or that has local indicators in it.
Now for some example tests!
Okay. Now you've got these one, two, three. What are some interesting tests that you might actually want to run? Well, these are some of the ones that we've run here, or that I've seen other companies run with interesting results. So: making titles more narrative versus more keyword driven. I've seen this test performed, and I've seen it perform both positively and negatively. I've seen people use the more narrative, sort of click-baity style titles and have it hurt their rankings and their traffic, and I've seen people improve with it as well.
Adding or removing links or blocks of links. It might surprise you to learn that I've seen people remove blocks of links like this and perform better. They find that user metrics stay the same or even improve a little, because people aren't dragged off to other sections, or maybe it helps make the content stand out better, and the search engines seem to like it too. I think in particular, Google might be looking at some of that Panda-style stuff and saying, "Man, this chunky block of unrelated links, or of links that no one is clicking, or of links that look keyword stuffed, we don't like that." In particular, I've seen people remove links from their footer and get better rankings.
Adding or removing social or comment buttons and share counts. So I've seen folks have share on Twitter, share on Facebook, here's how many likes it has. I've seen people say, "Comment on this," and either allow comments or not, and those have actually moved the needle. The comment one is particularly fascinating. I've seen people remove comments and perform better, I think oftentimes because zero comments is sort of a negative psychological indicator for a lot of folks. People don't share things that have zero shares or zero comments. But if they see the button to share something socially and no comments, because comments aren't allowed, sometimes that actually improves things. So interesting.
I've seen adding descriptive content, images, and videos help and hurt rankings at times. Sometimes people take a big block of text, thinking, "I need more good, unique content to rank this page," they shove it on the page, and the performance stays the same or goes down. I've seen other people say, "Hey, we need more good, unique content on this page," write something really compelling, put it on there, and it helps the rankings results.
This is why these tests exist. This is why we have these kinds of principles of some testing in the SEO world. With that, hopefully, you'll run some fantastic tests of your own, learn something amazing, improve the performance of your site and rankings.
And we'll see you again next week for another edition of Whiteboard Friday. Take care.
Testing is in the quintessential nature of SEOs, or at least it should be. That's why every SEO should run a personal site, not just for personal branding purposes, but also for testing.
"Playing" with a personal site, maybe something about an interest you have (if not really a personal one) is a safer way to test things that you will eventually apply on your clients' site or propose to your company if you are in-house.
Especially if you're in-house, the ideal would be creating a test site similar in nature to your company's: an eCommerce site if you are working for a retail company, an affiliate tourism portal if you are in-house in that niche, a niche news/blog site, and so on. Again, the purpose of those kinds of sites should be having the freedom to test, but - IMHO - it is important to have a basic objective commanding all tests, so as not to make them purely scientific ones (don't ever forget your "marketer" nature). This objective, therefore, should be a business-related one (better revenues, more subscribers, increased downloads...).
This may sound obvious, but it is not... as I see people testing just for the sake of testing, without really asking themselves an ultimate why, apart from the pure pleasure of discovering something about how search engines work.
And, well, if you are able to earn some extra money with your test site... better, no?
Absolutely right - or a number of sites to test various things out. That said, it's really difficult to test SEO, and I don't really buy the idea of having a control group of pages on a site - just by altering the link architecture in one area of a site you are altering the whole site's link flow.
I agree with you Gianluca, I've found it extremely beneficial as an in-house marketer to start a personal website (even if it just makes you look good).
In a lot of cases, there isn't a chance in-house to trial SEO practices, such as testing, compared to an agency environment. That is, unless you visit websites like this one!
I would add that the nature of your digital project should reflect a specific core skill you want to develop alongside discovering something about SE work. In my case that is writing, specifically long format informative articles. Choose your skill and test with that too!
I agree completely that anyone interested in doing SEO should start with and run a hobby site that isn't mission critical, and where they can do things like try out new plugins and other software.
Testing for purposes like improving conversions is a good idea, too.
Trying to run one site to test SEO on multiple sites might not be a good idea, though.
Google's Big Daddy update, which was an infrastructure update, made search more modular, much like the steps that Bing was taking around the same time.
Chances are very good that Google has different ranking algorithms for different genres or classifications of sites, and different thresholds in those algorithms for the same genres. So a site based upon making travel accommodations may use very different ranking signals and algorithms than one specializing in celebrity news, and Expedia and Perez Hilton may find that the SEO strategies and tactics they incorporate are very different. For example, the celebrity news site probably finds freshness plays a much stronger role in how well its blog posts rank for something. Since it is a news site, the quality of the sources it links to might also play a strong role, and the number of entities it mentions in a post might make it stand out as a representative site in a cluster of stories about the same topic, too. These types of factors probably don't matter at all on Expedia.
I agree that getting a baseline of traffic on your personal or pure test site is very wise. Then waiting for Google to come and pass its judgment is really the only way to find out what is going to happen.
DeepCrawl, Screaming Frog SEO Spider, Keylime Toolbox, and complete backups, preferably done prior to every change. Your hosting service may offer this; if not, I suggest a tool like Sucuri, VaultPress, or CodeGuard.
This will allow you to see what you have done and whether it affected you positively or negatively. Making large changes all at one time is always risky; however, having logs of what you have done is invaluable.
Bill brings up a fantastic point: if your hobby site is about finding out what Lindsay Lohan's ankle bracelet colors are this week, then you might not get the same results as if you're trying to do Internet marketing or international business.
Google never fails to teach us no matter how much it hurts sometimes.
Exactly. Also, while freshness still might matter to a site like Expedia, especially since accurate timeliness of prices might be important, for pages that quote and rank for text within the Declaration of Independence of the US, or the body of the Magna Carta, freshness likely isn't a very strong ranking signal at all. People interested in the latest colors of Lindsay Lohan's ankle bracelet may bemoan the loss of real time search results at Google, though.
... Gianluca Fiorelli, as always a thumbs-up collector. You are right, and I just want to give a thumbs up. But as always, I can give you a little inside view of German SEO.
As Angela Merkel told us a while ago, the internet is #Neuland, which means it is new for us - that's a bit funny in 2014. But SEO seems to be pretty #Neuland for us too, because all the SEO tactics we would see as old and not useful are often still used: bought links, bad directories, just get the link, comment spam... I know several sites with a link profile like this. And A/B tests? Here is what we do in general:
In general, we don't do A/B tests. Yeah, right, there are SEOs, maybe more than a handful, who do A/B tests, and several webmasters (who call themselves SEOs) who don't use them at all. They still buy links on eBay, and that's all. #Shame, but we are #Weltmeister at other games :-)
I get 20 mails a month from "SEOs" who want to sell links to me, or buy links from me. In the case of selling, I ask what they do, what these links would give me - what do their tests say about it? And they say: We don't do tests - links help - no matter what link... trust me...
This is a great WBF Rand.
One of my favorite topics.
The control group is such an important thing to take into account, I call it "sterilized environment", but the idea is the same.
I think that I'm lucky because we have 21 language editions for the same site and sometimes we run the same experiment in all of them at the same time. As all sites have completely different environments, once you see a pattern in them all -- it’s much easier to rely on the results.
The ghost effect you mentioned is very interesting to me. In 2008 we got a great link to one of our European editions from one of the largest newspapers in that country. Within a single week we jumped from the 4th to the 1st position for one of our top keywords. 15 months later the link was removed, but 5 years later we are still in the same position.
So yes, the "environment" has changed and other metrics were increased but there's no doubt that the ghost effect is there.
And one last thing about the rankings check (logged out, non-personalized, non geo-biased results) - I always recommend using 3rd party tools for that. Even for non-local queries, you guys in Seattle and me in Tel Aviv will often see completely different results. Years ago I used to log out and clean my cookies before every rankings check, but today I only look at our rankings over time in my rankings software which does it all + for all countries.
Thanks for the video!
I'm glad someone brought up the rankings check, because I'm not sure how to look at logged-out, non-personalized, non-geo-biased results. What are some software titles that you recommend? Are there any free online tools?
If someone could point me in the right direction, I'd appreciate it.
Hi Igal,
How can you be so certain that the same algorithms have been adopted for each of the different language editions?
Hi Bill,
Experience has taught me that common algorithms/filters work almost identically in every country. Panda and Penguin are examples; the only difference is the rollout timing.
Then, I've seen how things executed in one country tend to work also in others.
The biggest gap I see is how Google struggles with semantics and geotargeting, despite all the things added to solve those issues. An example is seeing how many sites specifically meant to target one country (e.g., Venezuela) may outrank Spanish sites in Google.es.
On the other hand, it is possible that for specific cases Google applies certain algorithms just in certain countries and not in others. The most outstanding example is EMD, which was never rolled out in the Latin-language countries. Or the Payday Loan update, which (again) didn't have any effect in the Latin-language Googles.
Thanks, Gianluca
That's one of the things I don't see enough discussion of: how different algorithms might potentially work from one place to another. Here's another one. Does Google compound decompounded words in queries, when helpful, to find matching sites within searches in Germany? It doesn't seem to in the US.
I'd love to see more experimentation on it - I'm not against experimentation at all, even if I may seem to be questioning it.
I'm doing that because a lot of people want SEO to be simple, but sometimes it is just complex regardless of their desire for simplicity.
Rand, I have a question. Do you think that traffic of a particular site needs to be at a certain level before the tests can be considered valid? Some of our clients are just starting out on their SEO journey and may have under 50 sessions on their websites in a given day.
Really great WBF!
One of my biggest takeaways is that there’s no set formula for what’s going to make a site successful in organic search (re: how you’ve seen improvements & losses in rankings from common on-page SEO techniques). Every site requires a unique approach and a healthy balance between what will achieve better visibility in the SERPs & what’s best for the user. Unfortunately, the uniqueness of every site makes testing difficult – I frequently run tests on personal, internal, or test-only sites, but what I learn from those is not always applicable to client sites (agency setting here!). And if I ran an experiment more relevant to the client on their live site, I would have a hard time ‘reversing’ something that I believe created positive results.
Also - really looking forward to your WBF on ghost effects and what role (if any) disavowing links has on this!
Great test to run: blog on a regular basis for a while and see what (stable) ranking you can achieve on the general term of your blog, e.g. "Internet Marketing Blog". Then stop posting new content (that should not be too hard) and wait for your blog to drop. Once it has, start blogging at half the rate as before and see how fast you come back up. Or blog just one item and just wait.
Especially if you are in it for the long run, you can determine the minimal and optimal rate of blogging.
This might vary by market and position you want to reach.
For my website, after some blog posts I can leave it for about 2-3 months before I drop out of the top 10 if I do not post anything. I'm back within 1 to 3 posts.
A competitor has blogged 38 times in 18 months on a specific "high value keyword" and got to spot 5 in the SERPs, with very low page and domain authority. He has lowered his frequency lately, and he is already losing spots. :-)
Last point is solid. I think, too, that ranking is the least important factor when you are receiving good visits. This was really helpful material. Currently I'm researching the keyword ratio in page content. Is there anything else that can help? I think that is also part of SEO analysis.
This post really points out where to test and how to test it. I think many times people pick their higher-ranking pages and try to nudge them up a position, instead of picking a page-3 or page-4 result and testing there to really see the results, as Rand depicted. Another area that is hit on here is picking the same topic, like here with "pasta," and not jumping around expecting to see the same results. In other words: same topic, different tests. Lastly, being knowledgeable about ghost effects is so vital; they have been a head-scratching moment for me in the past, so great highlight there. I took a lot away from this WBF this week and cannot thank you enough. Also, awesome point about not only caring about ranking performance but focusing more on marketing research concepts.
Rand I love the ideas one thing though first thing first I run a full backup of my site prior to making any experimental changes or serious changes of any manner. Outstanding WBF!
Rand,
Day by day, SEO is dramatically changing, so it's always good to run experiments or tests to get a better understanding of a specific result; you don't even need to rely on other people to build a better strategy. As I've just spotted here, you've been testing whether influencing click-through rate influences rankings. So, what result did you get from it?
By the way, the post is really inspirational for SEO folks to run these kinds of experiments. Thumbs up for the post! :)
Great post Rand!
I don't think there's enough material like this shared in the SEO community covering correct testing methods which can have a positive effect on performance and also contribute to improving the image of SEO within other industries and sectors. I've got a question regarding some of the tests that you conduct with IMEC. I know you've discussed recent findings on the current impact of anchor text on rankings. How do you legitimately test ranking impact that comes from the elements of an inbound link from an external site? Can you test inbound linking factors without resorting to getting hold of the kind of links that you really don't want pointing at your site?
Liam
Very interesting topic this week, Rand. I have a set of pages that are grouped together by geographical location, in this case by London boroughs; some are quite historical and some are new pieces. I'm going to give your approach a go and see what kind of results we get.
David
In SEO there are two most important parts: the first one is targeted traffic and the second one is rankings. In the testing part, you have described both of these two things very well and very effectively. Good one, Rand.
Hi Rand,
In terms of testing, I recently saw your views on the Page Fights webinar at Unbounce and completely agree with that aspect. If we want to test the optimal design for a better UX, we need to do a lot of experiments. The page winner in that webinar was the "Bed Bugs Solution" web page, and it clearly depicted the paths a customer might be looking for while scrolling through a web page. Testing is quintessential in order to find the break-even point of performance and lay down a benchmark too.
This clears up a lot in terms of process, Rand. Thanks for sharing. It does leave one burning question, however: does your wife use any cream at all in her carbonara?
Hi Rand!
Thanks for some great tips on testing changes to a website. I would like to emphasize the point that it is very important to list the differences between the groups that have been picked for running the tests. And for this I would suggest not going with test websites that provide freelancers shortlisted by the site for your test; rather, go for websites that allow you to select your audience depending upon various factors. This can help you manage the results, as you would know what differences to keep in mind while drafting a conclusion. Of course, we need to test multiple times before coming to a final verdict.
Thanks.
Really great stuff on testing. An SEO professional's work depends upon testing. Thanks for the great work, Rand.
I definitely like the idea that SEO tests should be repeated numerous times. How often, though, and under what circumstances? There are important dissimilarities to take into consideration when it comes to running repeated tests, like time, the number of trials, etc. - but I guess that's where the control group concept comes into consideration when analyzing complex conditions.
Interesting bit about the "Ghost Effect." Does anyone think the "Ghost Effect" might have a relationship to the fact that, when we look in Google Webmaster Tools, we often see backlinks that were removed quite a while ago?
Hello, and thank you for another awesome/fun post! For me, sometimes the videos here don't stream well and end up stopping and starting to the point that I can't watch :( Is this just my internet connection, or can the videos be streamed more efficiently?
Thank you!!
We generally haven't had complaints about the videos. Does that happen to you in multiple browsers? Are you in a location with limited bandwidth?
Great video Rand, I look forward to hearing more on the ghost effect....
Great post, as always with whiteboard friday. Thanks Rand.
I have a very basic question: for how long would you recommend running the test before considering it valid? With conversion rate A/B tests it's easy, just a matter of math and statistical significance, but where there's a delay you need to give Google a reasonable time frame to notice the changes and reflect them in the SERPs.
For how long would you recommend running a test before considering the result conclusive?
Rand, thank you for the video. In my opinion, testing should be done systematically. We should not expect to spend a year every time assessing and analyzing the numbers. I will remember the three aspects of experiments we should all keep in mind to ensure valid results.
Just caught up with this WBF. Really interesting topic Rand!
SEO testing on a client's site can be pretty risky, and like Gianluca said, using a personal site for this first is a great way to do it with minimal consequences. I definitely agree with your point on the effect 0 comments has on your content's shareability. I, for one, get a little suspicious if I see a post that has a fair amount of likes, tweets, etc. but has had literally no interaction in the way of comments! I guess removing comments from posts that do get other social engagement could create the illusion that the place would be flooded with comments, had they been allowed?
OMG, how did I miss reading this post? Damn good post from Rand. I tend to experiment with my personal websites rather than my corporate sites. Testing is really useful for deciding whether to implement new ideas and strategies on my other high-traffic sites. Nice and detailed explanation of the experiments on rankings and traffic. Again, good stuff, Rand :)
Very nice WBF, thanks! I'd love to see the ghost effect explained in more detail sometime... by the way, great shirt :)
Fantastic post regarding testing part.
Testing SEO changes can often be a tricky job. Using control groups is probably the best idea, but it can be a very time-consuming process.
Thank you for a great WBF!
Great food for thought for the day... it is the repeated tests that are telling.
Hey Rand,
Another great learning session. You highlighted an important aspect of being an SEO professional, and I'd say very few people in our industry deeply analyze ranking patterns and conduct such high-level tests.
As SEOs, we have to build this ability to think outside the box and start experimenting/playing with real stuff. That's the reason it's become advisable for everyone to have some sites on which to run such experiments. But what happens is people give up at the initial stage and start to use those sites for crappy linking or paid purposes.
How do you think someone can stick with this? Can it be developed, or is it natural?
Umar
My answer is that it can be trained... But if you aren't very curious by nature you will stop testing after a while
Exactly I thought so :)
A superb #wbf session, @Rand. A very interesting piece of information, one has to say, especially the testing portion, which says traffic and user experience are as important as rankings.
Another great WBF, Rand! Another indicator I watch for when it comes to #1 and #2, user performance and traffic performance, is whether the traffic is relevant and will have an opportunity to convert. I've seen efforts result in a boost of traffic, but non-relevant traffic, so that in the end the client is still not happy, even though that traffic does tend to still have SEO benefits. ROI is still the final indicator of a job well done or not.
Just finished up my 3rd conversation today about something just like this. Great stuff Rand, and even more stoked about being on the same brain waves!
PS - Enjoy the spaghetti carbonara
Your pal,
Chenzo
I Googled "ghost effect" several times with some variations, but had no luck finding the answer. What is it? I can't wait for weeks, so can anyone answer it?
I see Rand's hair is getting white. Isn't that one of the biggest signs of getting older?
Yet, again learnt something new.
Now I should learn how to measure those results!
Based on your findings would you say that Google is looking for a cleaner looking website. More focused around the content, or is it inconclusive?
I loved it when I read your post twice :) Thanks for the useful share!
Grand.