Editor's note: Today we're featuring back-to-back episodes of Whiteboard Friday from our friends at Stone Temple Consulting. Make sure to also check out the second episode, "UX, Content Quality, and SEO" from Eric Enge.
Like many other areas of marketing, SEO incorporates elements of science. It becomes problematic for everyone, though, when theories that haven't been the subject of real scientific rigor are passed off as proven facts. In today's Whiteboard Friday, Stone Temple Consulting's Mark Traphagen is here to teach us a thing or two about the scientific method and how it can be applied to our day-to-day work.
Video transcription
Howdy, Mozzers. Mark Traphagen from Stone Temple Consulting here today to share with you how to become a better SEO scientist. We know that SEO is a science in a lot of ways, and everything I'm going to say today applies not only to SEO, but also to testing things like your AdWords campaigns and Quality Scores. There are a lot of different applications you can make in marketing, but we'll focus on the SEO world because that's where we do a lot of testing. What I want to talk to you about today is how that really is a science and how we need to bring better science into it to get better results.
The reason is that in astrophysics there's something they're talking about these days called dark matter, and dark matter is something we know is there. It's pretty much accepted that it's there. We can't see it. We can't measure it directly. We don't even know what it is. We can't even imagine what it is yet, and yet we know it's there because we see its effect on things like gravity and mass. Its effects are everywhere. And that's a lot like search engines, isn't it? It's like Google or Bing. We see the effects, but we don't see inside the machine. We don't know exactly what's happening in there.
So what do we do? We do experiments. We do tests to try to figure that out, to see the effects, and from the effects outside we can make better guesses about what's going on inside and do a better job of giving those search engines what they need to connect us with our customers and prospects. That's the goal in the end.
Now, the problem is there's a lot of testing going on out there, a lot of experiments that maybe aren't being run very well. They're not being run according to scientific principles that have been proven over centuries to get the best possible results.
Basic data science in 10 steps
So today I want to give you just very quickly 10 basic things that a real scientist goes through on their way to trying to give you better data. Let's see what we can do with those in our SEO testing in the future.
So let's start with number one. You've got to start with a hypothesis. Your hypothesis is the question that you want to solve. You always start with that, a good question in mind, and it's got to be relatively narrow. You've got to narrow it down to something very specific. Something like how does time on page affect rankings, that's pretty narrow. That's very specific. That's a good question. You might be able to test that. But something like how do social signals affect rankings, that's too broad. You've got to narrow it down. Get it down to one simple question.
Then you choose a variable that you're going to test. Out of all the things that you could do, that you could play with or you could tweak, you should choose one thing or at least a very few things that you're going to tweak and say, "When we tweak this, when we change this, when we do this one thing, what happens? Does it change anything out there in the world that we are looking at?" That's the variable.
The next step is to set a sample group. Where are you going to gather the data from? Where is it going to come from? That's the world that you're working in here. Out of all the possible data that's out there, where are you going to gather your data and how much? That's the small circle within the big circle. Now even though it's smaller, you're probably not going to get all the data in the world. You're not going to scrape every search ranking that's possible or visit every URL.
You've got to ask yourself, "Is it large enough that we're at least going to get some validity?" If I wanted to find out what the typical person in Seattle is like and I just walked through one part of the Moz offices here, I'd get some kind of view. But is that a typical, average person from Seattle? I've been around here at Moz. Probably not. And it wouldn't be a large enough sample either.
Also, it should be randomized as much as possible. Again, going back to that example, if I just stayed here within the walls of Moz and did research about Mozzers, I'd learn a lot about what Mozzers do, what Mozzers think, how they behave. But that may or may not be applicable to the larger world outside, so you randomize.
Next, we want a control. So we've got our sample group. If possible, it's always good to have another sample group that you don't do anything to. You do not manipulate the variable in that group. Now, why do you have that? You have that so that you can say, to some extent, that if we saw a change when we manipulated our variable and we did not see the same thing happen in the control group, then more likely it's not just part of the natural things that happen in the world or in the search engine.
If possible, even better, you want to make that what scientists call double blind, which means that even you, the experimenter, don't know which is the control group out of all the SERPs that you're looking at or whatever it is. As careful as you might be and as honest as you might be, you can end up manipulating the results if you know who is who within the test group. It's not going to apply to every test that we do in SEO, but it's a good thing to have in mind as you work on that.
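To make the control and blinding ideas concrete, here is a minimal Python sketch (everything in it is hypothetical and not from the video): a set of pages is split at random into a test group, where the variable will be manipulated, and a control group that is left alone.

```python
import random

# Minimal sketch: randomly assign pages to a test group (variable gets
# manipulated) and a control group (left untouched). URLs are hypothetical.
urls = [f"https://example.com/page-{i}" for i in range(1, 101)]

random.seed(42)                      # fixed seed so the split is reproducible
random.shuffle(urls)
midpoint = len(urls) // 2
test_group = urls[:midpoint]
control_group = urls[midpoint:]

# For a rough form of blinding, the group assignment can be written out and
# kept away from whoever analyzes the results until measurement is finished.
assignment = {url: "test" for url in test_group}
assignment.update({url: "control" for url in control_group})
```

Random assignment is what lets you argue later that a change seen in the test group but not in the control group probably wasn't just background noise.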
Next, very quickly, duration. How long does it have to be? Is there sufficient time? If you're just testing something like how quickly a URL I share to Google+ gets indexed in the SERPs, you might only need a day on that, because typically it takes less than a day in that case. But if you're looking at seasonality effects, you might need to go over several years to get a good test on that.
Let's move to the second group here. The sixth thing: keep a clean lab. What that means is try as much as possible to keep out anything that might be dirtying your results, any kind of variables creeping in that you didn't want to have in the test. That's hard to do, especially in what we're testing, but do the best you can to keep out the dirt.
Manipulate only one variable. Out of all the things that you could tweak or change, choose one thing or a very small set of things. That will give more accuracy to your test. The more variables you change, the more other effects and interaction effects are going to happen that you may not be accounting for and that are going to muddy your results.
Make sure you have statistical validity when you go to analyze those results. Now that's beyond the scope of this little talk, but you can read up on that. Or even better, if you are able to, hire somebody or work with somebody who is a trained data scientist or has training in statistics, so they can look at your evaluation and tell you whether the correlations or whatever you're seeing actually have statistical significance. Very important.
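For readers who want to see what a basic significance check might look like, here's a small sketch with hypothetical counts (using SciPy's chi-square test; this is an illustration, not anything from the video) comparing how many pages improved in a test group versus a control group:

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: how many pages improved in rankings vs. didn't,
# in the test group (variable changed) and the control group (left alone).
observed = [
    [34, 66],   # test group: 34 of 100 pages improved
    [21, 79],   # control group: 21 of 100 pages improved
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"p-value: {p_value:.3f}")
# A common (though arbitrary) convention: p < 0.05 suggests the difference
# is unlikely to be explained by sampling noise alone.
```

A trained statistician will also help you pick the right test for your data; the chi-square test here is just one common choice for count data like this.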
Transparency. As much as possible, share with the world your data set, your full results, your methodology. What did you do? How did you set up the study? That's going to be important to our last step here, which is replication and falsification, one of the most important parts of any scientific process.
So what you want to invite is: hey, we did this study. We did this test. Here's what we found. Here's how we did it. Here's the data. If other people ask the same question again and run the same kind of test, do they get the same results? Even better, if you have some people out there who say, "I don't think you're right about that because I think you missed this, and I'm going to throw this in and see what happens," aha, they falsify. That might make you feel like you failed, but it's success, because in the end what are we after? We're after the truth about what really works.
Think about your next test, your next experiment that you do. How can you apply these 10 principles to do better testing, get better results, and have better marketing? Thanks.
Excellent WBF, Mark, and so good to see you back here!
Firstly, thanks a ton for sharing your insights - SEOs need to approach their trade and profession as a science. And to those who say SEO is simple and takes 5 minutes to learn, I say hogwash!
From your list of 10, I'd also add #11 - Equipment. SEOs need to make sure that the tests they carry out use a variety of electronic equipment, including high-spec PCs, tablets, mobiles, and smartphones. I know this sounds obvious, but many just test on desktop, not the others ;)
Below are a few additional tools / posts that may be useful to viewers, when testing and analysing SEO theories, causations and correlations:
SERP Volatility Tools
Forums & Blogs
Google Update History
Google Patents
Alternative Search Engines (not Google) - great for comparisons!
Hope the above provide SEOs with a little more beef to their roast :)
Thanks for that great list, Tony! Obviously you can only do so much in an under-10 minute video, so I appreciate your addition here.
You're welcome Mark :) Hopefully the extra links will complement your much-needed WBF. But I'd be happy to petition Moz to have you present for an hour ;)
Great list of tools! Thanks Tony
You're welcome Cynthia. Hopefully they'll help SEOs keep up to speed with Google's (and other search engines') shenanigans :)
Great! Thanks, Tony, for adding value to the article.
That's a good list, Tony. We can also add some tools like Moz, Ahrefs, and Link Detox for link analysis.
Thank you for the useful tool list.
Thanks Tony! I think the Patent docs are some of the most overlooked 'important' resources an SEO should have in their toolbox.
Absolutely Jason. Smart SEOs plan for the possible future, thanks to people like Bill & Barbara :)
My own background is in the sciences, so I was very happy to see information about how to correctly approach the science of our work. [My particular marketing pet peeve is when someone looks at two landing pages and says, "This one got three orders and that one only got two, so the first one is clearly the winner."] Thank you for sharing!
Oy, tell me about it, Linda! Drawing conclusions from too small a sample size, or from one-off incidents ("I did X on Tuesday, and on Wednesday my ranking went up by three. Must have been caused by X!"), is among the worst sins.
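To put some made-up numbers on Linda's landing-page example, a quick Fisher's exact test (a reasonable choice for very small counts) shows how little three orders versus two actually tells you:

```python
from scipy.stats import fisher_exact

# Made-up traffic for the two landing pages: 3 orders vs. 2 orders
# on roughly equal visits. Fisher's exact test copes with tiny counts.
table = [
    [3, 497],   # page A: 3 orders, 497 visits without an order
    [2, 498],   # page B: 2 orders, 498 visits without an order
]

odds_ratio, p_value = fisher_exact(table)
print(f"p-value: {p_value:.2f}")  # close to 1.0: no evidence page A "won"
```

With numbers like these, the "winner" is indistinguishable from chance; you would need far more conversions before declaring anything.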
Finally, we are talking about the science in SEO. Thank you very much, Mr Mark Traphagen. I am so glad you mentioned "Statistical Validity" - The degree to which an observed result, such as a difference between two measurements, can be relied upon and not attributed to random error in sampling or in measurement - in your presentation.
Far too many SEOs are drawing conclusions based on (as I call them) "paper" experiments. Sounds good on paper, but shows different results in the real world. You are absolutely right - "we don't see inside the (Google) machine". So when talking about statistical validity we should also talk about "reliability".
Reliability is the degree to which a test consistently measures whatever it measures.
Measuring once is not enough. Test and re-test over time.
In a way, errors of measurement that affect reliability are random errors.
Errors of measurement that affect validity are systematic or constant errors. And those cost many SEOs around the globe dearly.
Well said, Omi. Perhaps even worse is how many conclusions we see out there that aren't even based on an attempt at scientific testing, but rather just on anecdotal experience, where someone has seen something happen and then publishes it as a conclusion: "X affects Y!"
What Mark said, Omi :)
As you say, measuring on a regular basis is a fundamental part of SEO, considering how often Google updates their algos.
And contrary to what Google have said (H/T Lisa Barone for her response), SEOs shouldn't test ideas on their big money sites ;)
*cries tears of joy*
Thank you. The vast majority of marketing "research" is so pathetically bad. There is so much that can be expanded upon with this (obviously not within a 10 minute video though).
I think in terms of the sample group, size is less important than demographic representation. If you measure a group of 500 cis white males from a single neighborhood in Seattle and a group of 100 people of all different genders, races, and cultures from all different parts of Seattle, you're likely to be able to generalize more with the latter group despite it being a smaller sample -- however, like you mention, randomization is the best way to get the best possible sample group.
I also think that equally important to constructing valid and reliable studies is having the ability, as a consumer, to distinguish between garbage research and good research. I can't tell you how many times I see an article titled "Science Proves XYZ" or "The Psychology of XYZ" where the only resources cited are other blog articles and infographics -- and people won't even question the claims because it is published by someone they trust. It is so important as consumers of research to understand the 10 elements you mention (and even more if possible), especially if you are trying to make business decisions based on "research".
Wow, Annie, I think this is the first time anyone ever told me that my content made them cry. Thankful that it was tears of joy ;-)
Very good point about size of the sample group vs representation. That's what I was getting at by saying "sufficient size," which will be different in different cases. You can only say so much in a 10-minute video, so your expansion is quite helpful.
And don't get me started on the role of journalism when it comes to either "pure" sciences or SEO! Totally agree with you. Too many times people take either very preliminary findings or shaky studies and trumpet them as "proven." Then once one source has reported that (and especially if they have turned it into an Infographic, the Sacred Text of the Religion of Factual Fast Food), then it must be true, right? Oy!
Great WBF Mark. I agree, with all of the many variables involved and how even one change can affect multiple things I think testing ONE thing at a time is the way to go. Once you get into multivariate testing you have a whole different problem on your hands. Very difficult. I think the best way to speed that up is to run lots of single variable tests in tandem. Great list shared by Tony as well!
Thanks, Patrick!
And by the way...happy birthday, Patrick!
Where do you go to gather this sample group (data)?
Jhines that's a big question, and depends a lot on what you're trying to test. Wherever your data comes from, you want to make sure you apply some of the principles I shared to make sure it's random, sufficient in size, and controlled.
You were spot on with dark matter. Indeed, SEO experts want to know what's going on inside Google and Bing, but that is known only by employees of those companies. Thanks for a great article.
Mark that was great it really reminded me of grade 9 science class ;)
But seriously now,
There is only one problem with this whole science cycle... You can only do it when you have time and that is something that is almost impossible to create when you are new in the biz.
There is a saying in Hebrew that, loosely translated, goes: "Learn to shave on another man's beard." Unless you have time, read the results of others' experiments; don't create them yourself. You won't cut yourself that way.
That is why Moz is so great! Moz is a great beard to practice on ;)
Love the lab coat Mark. Looking forward to seeing more WBFs!
Very true that not all of us need to be scientists in order to benefit from science. I think knowing these principles is still valuable, though. For example, you could use them to evaluate for yourself the potential validity of anyone else's study. A good study or test report should include the methodology used.
A good seo theory to end the week. Thanks Mark !! ;)
Glad you enjoyed it, Ivan!
The theory always fits, but in the world of SEO it is not always accurate. The results are not always the same, and it is difficult to apply the science.
Sure thing; that's why I created this video!
True - SEO experiments can prove or reject some theory in laboratory conditions.
In real life things get complicated and messy due to the black box and the "over 200 ranking factors." So far it's security via obscurity, and it works extremely well for its creators.
Good point, Peter. We have to always keep in mind that even the best, most controlled experiments usually don't "prove" anything (beyond any shadow of doubt). Google remains largely a black box, and we're doing our best to ascertain what's in the box by testing things that we can see from the outside. But we always have to be a little tentative in our conclusions. That's why the last step I outlined--replication and falsification--is so important.
"Past performance does not guarantee future results."
True - last step is very important.
IMO that is indeed the true crux of the issue. While Mark (hi Mark! :) ) has laid out the overview of how scientists conduct scientific experiments, because there's a black box involved with conducting experiments on the live internet, how can anyone claim a scientifically valid experiment has been conducted? Far too many variables are simply unknown and out of the experimenter's control.
Hi David,
To varying degrees, there is a black box in almost any scientific experiment. There are almost always unseen interactions or uncontrolled/unknown variables. So should we give up on all science? Can science tell us nothing true (or perhaps true enough)?
Of course not. We all know that despite the imprecision, despite the always-present imperfections in any results, science works. We have countless things we do and use every day that we have because science works. I'm typing on one right now. Others in the form of medicines and treatments once saved my life. I could go on....
The point is that many confuse the science with perfect truth, and it isn't that. It's really a quest for ever more precision in coming closer to the truth.
And it's the same in SEO science.
We don't stop doing science because experiments can never be perfect. Rather, we work hard to make experiments as clean and precise as we can, working with what we can control, to get us as close to the truth as we can get.
In SEO, that "close enough" when done right results in real, measurable, positive results for sites.
Great list thanks...
Great and very necessary post! Agree we should be more thorough in our testing and this is a fantastic framework to do SEO exploration. I know some SEOs are more into this sort of thing than others (Dan P, Rand with IMEC, etc.)
On the other hand it can be really hard to get this sort of testing approved in an agency setting. It's easy for the bosses to say "wait until Rand, Mark or Dan do it, they'll write about it, then we'll do that." While that isn't the way to lead from the front, it sure is cheaper.
Great stuff - makes me want to be on the teams that do this sort of work!
Hello Mark, such an awesome video, and it makes the science of SEO very simple and easy to understand. You have explained all the small and major points crystal clear. This great stuff helps educate us about which things are positive or negative for our SEO results. I always maintain a tracking sheet to understand my SEO results.
Thanks for sharing the precious details... :)
my website is https://studysource.in
how to increase organic traffic?
Hi there!
That's a very big question, and you'd be best served by reading through the information in our Learn section. If you have specific questions, you should be able to get answers on our Q&A forum. :)
Best one. Thanks for sharing!
Hi guys, I have a website with over 400 articles, around 80 percent original (www.beautyzoomin.com). I need some help finding the 100 percent copied articles on my site. Are there any free or paid tools out there? Can anybody take a quick look? Thank you very much.
Hi Mark,
Thank you for your video. I agree with you. SEO must have a scientific approach. Science has given us some wonderful tools through the centuries, and we should use them.
However, I do not agree with your approach. You stated we need to change only one variable at a time, as there might be interactions. Well, I think those interactions might be as important as the variable itself. If we set up a good experiment, we can determine whether those interactions matter. We have mathematical models for that and, moreover, we have amazing free tools to determine whether those interactions are important after the experiment.
Selecting only one variable at a time is not only an inefficient way of experimenting but also an inappropriate one, as you are supposing that interactions are negligible, and science cannot be based on suppositions, only hypotheses.
This article is quite nice:
https://en.wikipedia.org/wiki/Factorial_experiment
I'll give you an example. Let's say you want to determine the tastiest way to prepare an instant coffee, and for that you want to know whether it's better to use 1 or 2 spoons of instant coffee. However, you are ignoring the fact that the temperature of the water is relevant, as the hotter it is, the faster the coffee dissolves. If you set up the experiment with 2 variables (temperature and amount of coffee), the outcome of the experiment will demonstrate the importance of that interaction, which might matter even more than the amount of coffee.
I hope you don't misunderstand me, as I only want to show you the current scientific approach used in labs.
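As a rough illustration of the factorial approach described in this comment (with invented taste ratings, using the statsmodels library; none of this comes from the original thread), the coffee example could be analyzed with an explicit interaction term:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Invented taste ratings for a 2x2 factorial design: spoons of coffee
# (1 or 2) crossed with water temperature (0 = lukewarm, 1 = hot).
data = pd.DataFrame({
    "spoons":    [1, 1, 1, 1, 2, 2, 2, 2, 1, 1, 2, 2],
    "hot_water": [0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1],
    "taste":     [3, 4, 6, 7, 4, 5, 8, 9, 3, 7, 5, 9],
})

# "spoons:hot_water" estimates the interaction between the two factors
# directly, instead of assuming it away by varying one factor at a time.
model = smf.ols("taste ~ spoons + hot_water + spoons:hot_water", data=data).fit()
print(model.summary())
```

The interaction coefficient in the output is what tells you whether the effect of more coffee depends on the water temperature, which is the point the commenter is making about factorial designs.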
I always enjoy watching every WBF, this time though, it was special, and I wanted to thank you Mark.
I am a "new guy" to the SEO world and I've been observing the past and current trends, approach and techniques. After a while, and specially because of MOZ, I realized that SEO was the science of the web. Hear it loud can clear today was very fun to say the least.
Dark matter has to be the best example I have ever heard when talking about SEO mysteries. If I may even go further in the analogy, dark matter represents +/- 27% and dark energy represents +/- 68% of the universe. We only understand what they are not, meaning we only understand +/- 5%, and the rest is elusive. Still, we accept that we do not understand, and that's what makes it a lot more interesting.
If Dark matter = Search engines
Then Dark energy = humans
Some may argue that we do understand humans... I personally think it's the most complex problem we deal with. At the core, it's humans that created and perfected the search engines after all. A lot more can be said, but this is not my show. ;)
About that double blind technique: I think it's a technique that can be pushed aside too easily. But when you know that cognitive biases can play tricks on you, that's when you realize you must put your own perspective outside the scientific process, assuming you're in this for the truth.
Confirmation bias: The tendency to search for, interpret, focus on and remember information in a way that confirms one's preconceptions. Wikipedia
Lastly, Rand mentioned it in his last webinar presentation: correlation does not imply causation.
Thanks again Mark, you made my day and I'll make sure to remember what you shared today.
Great WBF. Seems like high-level content in simple words.
These 10 SEO scientific steps are really nice, and they also work with e-commerce sites.
Great WBF! Nice analogy to the search engines with "dark matter"!
Thanks Gaetano. Maybe the next thing we'll discover is that search engines exist in multiple universes, so there is an infinite number of search ranking factors. That will make our conference slide decks huge!
Glad to see you on camera Sir! :)
SEO is indeed a science, but it involves quite different applications of scientific principles. Do you think that in our industry, testing on a mass scale is missing? I only know of Rand's IMEC Lab study and don't know of any other related studies going on. I'm not talking about the case studies that we occasionally see on different blogs, but rather a complete research lab that is solely dedicated to doing research.
Your input will be appreciated!
Umar, IMEC is obviously a great example. (And close to my heart, as it is now led by Eric Enge and myself of Stone Temple Consulting, although Rand remains very much involved and on our board.)
There are others doing good testing though. I'm thinking about developing this video into a much more in-depth article, and if I do I'll certainly cite a lot of good examples.
In the meantime, if I may, I'd point you to the series of big data studies we have done at Stone Temple.
It will be really interesting if you could craft a detailed piece on this topic.
Yes, I am aware of your experiments, and they are serious studies involving pretty good data sets. Are you and Sir Eric working on something in IMEC these days? May we have a sneak peek? :)
Ah now we go from careful scientists to superstitious humans. We believe IMEC sneak peeks jinx our experiments ;-) Stay tuned!
BTW, good moment to mention that quite a number of our IMEC experiments "fail." That is, they yield no significant results. But of course, that can be just as valuable information.
Failure in these kinds of experiments unlocks more doors than success! :) I'll definitely stay tuned!
Hi Mark! First of all, you literally look like a real scientist :)
Secondly, great WBF! For the first time, I think, a topic like "SEO Science" has been put forward as a blog post, and I really liked the association with "dark matter" and never seeing inside the Google machine, and that's why we need to keep testing and testing.
Thanks for the compliment, Ajay, and perhaps I should add a lab coat to my regular wardrobe ;-)
In fact, though, a lot of SEOs I know who are really worthwhile understand and practice the scientific part of their profession. I didn't originate this by any means. For example, it's not hard to find numerous examples of Rand Fishkin here on Moz and in his conference slide decks espousing different parts of this concept.
In fact, the lab coat I'm sporting is one he's worn himself in several videos and stage presentations!
Haha, I bet you should!
You are right, Mark. I like Moz and read almost every blog post because you can see the research that has been done in those posts. I also love the scientific approach in the posts on Bill Slawski's blog. However, with my point above I meant that the name "SEO Science" is truly appreciated. Thanks again for the blog and for replying to my comment.
Very good methodology explained, Mark. I believe testing should be done randomly, particularly in our field. I personally don't fully trust tests performed with a specific outcome in mind. We know how political surveys are conducted and misrepresented. Similarly, when we instruct a specific group of people to do something specific, it can give us an idea, but it can still be far from the truth. This is the reason time duration is so important, and why you should run a test 3-4 times at different times of the year.
Well, as I said, some of those factors depend on the nature of the experiment or test. For example, the length of time needed to get valid results will vary widely. And it is only necessary to test at different times of year if you're testing something that is affected by seasonality.
Great topic - in a few points and little time you said a lot of good things. Keeping out the dirt is a hard thing. More than 200 factors, and we don't know them all - so a user could be dirt (don't get me wrong on that point of view).
But what's the difference between 2 and 7?
Andreas, good question. As I watched the video today I realized that I might have not drawn as clear a distinction between 2 and 7 as I had intended.
Obviously they are closely related, but here's why I distinguished them:
#2 is the step of choosing the variable to be tested. That's a very important step. You have to be clear about what it is you're testing.
#7 is then limiting the variable, making sure that you are tweaking only one variable in the test, that it is the variable you wanted to test, and that you aren't inadvertently causing other things to change just by the way you're running the test.
Thanks for the quick answer Mark Traphagen and a nice weekend =)
Mark, you look half SEO and half scientist, but the logic you shared is essential for me and my store, as for the last two months my site has been suffering from poor SEO.
[link removed by editor]
Hi Mark,
For testing, would you recommend testing multiple variables within the same vertical (industry), or should we test across multiple verticals to see the effects on each, in order to reach a conclusion on how the 200+ factors affect results?
Also, would you recommend using duration itself as a variable, keeping the short-term and long-term picture in mind?
Interesting questions, Vishal.
You can have multiple iterations of an experiment on one set of data, each time testing a different variable. Remember, the variable is the one thing you change to see whether changing that one thing has any measurable effect. But it is quite common to then do another experiment with the same data set where you change the variable, and then compare the two results. In fact, you can often learn more that way.
So in your hypothetical, I would take one variable at a time, and then run an experiment in each of the verticals you want to test. Then repeat with the next variable, and so on.
The main idea is to isolate and restrict as much as possible what is being altered so that whatever result emerges (if any) is more likely to have either been caused by or at least be correlated with the particular variable in that particular situation.
As for your question about duration, not every experiment is going to have time as a factor. But if you have a hypothesis that time may have an effect, then it might be worth making time itself the variable. For example, at Stone Temple Consulting we recently selected a huge set of tweets and then searched for them on Google to see how many were indexed. The number was pretty small, but we had a hypothesis that Google might index more tweets over time. So we looked again at several different time intervals, and found that, indeed, the number of indexed tweets increased. So in that case, each search performed at a certain time interval made time our variable.
By the way, I would not get hung up on the "200 search ranking factors" (or whatever number; there are strong indications from Googlers that there are really far more than that) as something to test for. You really can't. For the most part, we have no idea what they are. Furthermore, it is highly likely that few, if any, of them work alone. Rather they probably interact in very complex ways. So not only can you not see them; you can't isolate them.
Better to test for what we can see. In one sense, SEO testing is sending one thing (the variable) into the Black Box, and then watching what comes out the other side.
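As a loose sketch of the time-as-variable idea Mark describes with the tweet-indexing study (the is_indexed helper below is purely hypothetical; how you actually check indexation is up to you and is not specified in the original), the same sample is simply re-measured at fixed intervals:

```python
import time

def is_indexed(url: str) -> bool:
    """Hypothetical stand-in: replace with however you actually check
    indexation (a rank tracker, a manual lookup, etc.). No real API assumed."""
    return False  # placeholder result

def measure_over_time(urls, wait_hours_between_checks, checks):
    """Re-measure the same sample at fixed intervals, making time itself
    the variable while everything else about the test stays the same."""
    results = []
    for check_number in range(1, checks + 1):
        indexed = sum(1 for url in urls if is_indexed(url))
        results.append((check_number, indexed, len(urls)))
        print(f"check {check_number}: {indexed}/{len(urls)} indexed")
        time.sleep(wait_hours_between_checks * 3600)
    return results
```

The key design point is that the sample and the measurement method stay fixed across checks, so any change in the count can be attributed to the passage of time rather than to a change in how you measured.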
SEO is not a simple process now; it's SEO science. Thanks, Mark.
Great WBF. Seems like high-level content in simple words.
As a former teacher, you could pay me no higher compliment. Thanks!
Hi Mark, very good video, and it's very nice to hear about SEO science. Before this I had only heard about SEO techniques, tips, tricks, and so on. That's right: keeping experimenting with SEO helps us understand which things work positively or negatively for our SEO results. So I always suggest SEO beginners maintain a tracking sheet and record their activities and their effects. It will help you understand SEO in depth. Again, thank you, Mark, for a great video.
Glad you enjoyed it, Jitender!
Great topic. I've definitely seen testing done wrong, and it always makes me cringe. It's easy to mess up the entire test by messing up one step in the testing process. Having a good overview like this makes it easier to bring the powers that be in an organization aligned around HOW tests should be run and WHY they need to be run in a certain fashion.
Awesome post with step-by-step guidance. Thanks, Mark.
Hi Mark, thanks for the post. What I understand is that we need to stick with white hat SEO and not lose control, and to know exactly what we are doing we need some tools to measure. A little mistake may harm our rankings.
But thanks to Moz products like Keyword Difficulty, Crawl Test, etc., I am not spending as much time getting started as before.
Before I publish an article I do as follows.
Please advise if there is anything else I need to do.
Hi Nauman, thanks for your question, but it's kind of off topic for this video, which is about doing high quality experiments to discover fundamentals of how search works.
LOL, SEO is not done in a vacuum. The SERPs and the ranking algorithm are not static, meaning you really do not have control in your experiment, because you do not control the algorithm. So share real-time data across a network of similar sites and then compare that with non-similar sites.
A quick movie quote
From Star Trek
Scotty: What's that?
Spock Prime: Your equation for achieving transwarp beaming.
Scotty: [to himself] He's out of it
Scotty: [reads the equation] Imagine that! It never occurred to me to think of SPACE as the thing that was moving!
If this was 1998 you could do this: "But if you're looking at seasonality effects, you might need to go over several years to get a good test on that." We used to be able to do that without too much trouble because updates were not as frequent or as major.
I see so many newbies muck up the difference between correlation and causation; keeps me chuckling.
I am not saying don't test elements for SEO because that is super important and testing has always been the backbone of good SEO practice. I am just saying be mindful that the environment is always changing and you really need solid controls in order to have tangible results.
Speaking of astrophysics my coffee mug says "rocket scientist" lol
Good presentation it made me think about project I am working on.
HAPPY FRIDAY
Thanks "Oldest." I don't disagree. This was a quick summary presentation, so I didn't get into some of the subtleties you and others have brought up. But yes, I don't mean to imply, "Do these 10 steps and you'll be guaranteed to arrive at the Pure Absolute Truth!" That just ain't so.
But as you say, that doesn't mean we just give up on testing. We don't give up because our experiments will always be imperfect. Rather awareness of those imperfections and limitations helps us to more accurately and soberly assess our results.
As Rand Fishkin has said (paraphrasing): "Correlation isn't causation, but it sure beats nothing!"
See https://moz.com/blog/seo-correlation-causation and https://moz.com/rand/what-do-correlation-metrics-really-tell-us-about-search-rankings/
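For anyone curious what a rank-correlation calculation looks like in practice, here's a tiny sketch with invented numbers using SciPy's Spearman correlation (the rank-based measure used in studies like those linked above); note that even a strong correlation says nothing about causation:

```python
from scipy.stats import spearmanr

# Invented numbers: ranking positions for ten results and a page-level
# feature (word count). Spearman works on ranks, which suits ranking data.
positions   = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
word_counts = [2100, 1800, 1950, 1400, 1600, 900, 1100, 750, 820, 600]

rho, p_value = spearmanr(positions, word_counts)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
# Even a strong correlation here would not establish causation.
```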
SEO, like other areas of marketing, relies on science to gain insights. However, challenges arise when marketers start reacting to theories that haven't been tested or proven as fact. There's something each person can gain from understanding and using the scientific method, and it can be applied to our day-to-day work routines. There are 10 steps that all scientists follow.
Thank you for sharing this article. I really find it helpful :-)