We want to be able to answer questions about why one page outranks another.
“What would we have to do to outrank that site?”
“Why is our competitor outranking us on this search?”
These kinds of questions — from bosses, from clients, and from prospective clients — are a standard part of day-to-day life for many SEOs. I know I’ve been asked both in the last week.
It’s relatively easy to figure out ways that a page can be made more relevant and compelling for a given search, and it’s straightforward to think of ways the page or site could be more authoritative (even if it’s less straightforward to get it done). But will those changes or that extra link cause an actual reordering of a specific ranking? That’s a very hard question to answer with a high degree of certainty.
When we asked a few hundred people to pick which of two pages would rank better for a range of keywords, the average accuracy on UK SERPs was 46%. That’s worse than you’d get if you just flipped a coin! This chart shows the performance by keyword. It’s pretty abysmal:
It’s getting harder to unpick all the ranking factors
I’ve participated in each iteration of Moz’s ranking factors survey since its inception in 2009. At one of our recent conferences (the last time I was in San Diego for SearchLove) I talked about how I used to enjoy it and feel like I could add real value by taking the survey, but how that's changed over the years as the complexity has increased.
While I remain confident when building strategies to increase overall organic visibility, traffic, and revenue, I’m less sure than ever which individual ranking factors will outweigh which others in a specific case.
The strategic approach looks at whole sites and groups of keywords
My approach is generally to zoom out and build business cases on assumptions about portfolios of rankings, but it’s been on my mind recently as I think about the ways machine learning should make Google rankings ever more of a black box, and cause the ranking factors to vary more and more between niches.
In general, "why does this page rank?" is the same as "which of these two pages will rank better?"
I've been teaching myself about deep neural networks using TensorFlow and Keras — an area I’m pretty sure I’d have ended up studying and working in if I’d gone to college 5 years later. As I did so, I started thinking about how you would model a SERP (which is a set of high-dimensional non-linear relationships). I realized that the litmus test of understanding ranking factors — and thus being able to answer “why does that page outrank us?” — boils down to being able to answer a simpler question:
Given two pages, can you figure out which one will outrank the other for a given query?
If you can answer that in the general case, then you know why one page outranks another, and vice-versa.
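To make that framing concrete, here is a minimal sketch of how you might set the question up in Keras: a shared subnetwork scores each page from a vector of ranking-factor features, and the model predicts the probability that page A outranks page B. This is an illustration only, not the model described later in this post; the feature count, names, and synthetic data are all hypothetical.

```python
# A minimal sketch (assumptions: ~10 hypothetical features per page) of a
# pairwise "which page ranks better?" model, in the RankNet style: a shared
# subnetwork scores each page, and the score difference is squashed into
# P(page A outranks page B). Illustration only, not the model from this post.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

N_FEATURES = 10  # e.g. a few link metrics plus naive keyword-relevance scores

# Shared scoring subnetwork: one page's feature vector -> a single rank score.
scorer = keras.Sequential([
    keras.Input(shape=(N_FEATURES,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(16, activation="relu"),
    layers.Dense(1),
])

page_a = keras.Input(shape=(N_FEATURES,), name="page_a")
page_b = keras.Input(shape=(N_FEATURES,), name="page_b")

# Probability that page A outranks page B, from the difference of the scores.
diff = layers.Subtract()([scorer(page_a), scorer(page_b)])
prob_a_wins = layers.Activation("sigmoid")(diff)

model = keras.Model(inputs=[page_a, page_b], outputs=prob_a_wins)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy synthetic data standing in for real SERP pairs: the label is 1 when
# page A actually ranked above page B for the query.
X_a = np.random.rand(256, N_FEATURES).astype("float32")
X_b = np.random.rand(256, N_FEATURES).astype("float32")
y = (X_a.sum(axis=1) > X_b.sum(axis=1)).astype("float32")
model.fit([X_a, X_b], y, epochs=5, batch_size=32, verbose=0)
```

The hard parts, of course, are choosing the features and collecting enough labelled SERP pairs; the architecture itself is almost beside the point.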
It turns out that people are terrible at answering this question.
I thought that answering this with greater accuracy than a coin flip was going to be a pretty low bar. As you saw from the sneak peek of my results above, that turned out not to be the case. Reckon you can do better? Skip ahead to take the test and find out.
(In fact, if you could find a way to test this effectively, I wonder if it would make a good qualifying question for the next Moz ranking factors survey. Should you listen only to the opinion of those experts who are capable of answering with reasonable accuracy? Note that my test that follows isn’t at all rigorous because you can cheat by Googling the keywords — it’s just for entertainment purposes).
Take the test and see how well you can answer
With my curiosity piqued, I put together a simple test, thinking it would be interesting to see how good expert SEOs actually are at this, as well as to see how well laypeople do.
I’ve included a bit more about the methodology and some early results below, but if you'd like to skip ahead and test yourself you can go ahead here.
Note that to simplify the adversarial side, I’m going to let you rely on all of Google’s spam filtering — you can trust that every URL ranks in the top 10 for its example keyword — so you're choosing an ordering of two pages that do rank for the query rather than two pages from potentially any domain on the Internet.
I haven’t designed this to be uncheatable — you can obviously cheat by Googling the keywords — but as my old teachers used to say: "If you do, you’ll only be cheating yourself."
Unfortunately, Google Forms seems to have removed the option to be emailed your own answers outside of an apps domain, so if you want to know how you did, note down your answers as you go along and compare them to the correct answers (which are linked from the final page of the test).
You can try your hand with just one keyword or keep going, trying anywhere up to 10 keywords (each with a pair of pages to put in order). Note that you don’t need to do all of them; you can submit after any number.
You can take the survey either for the US (google.com) or UK (google.co.uk). All results consider only the "blue links" — i.e. links to web pages — rather than universal search results / one-boxes etc.
What do the early responses show?
Before publishing this post, we sent it out to the @distilled and @moz networks. At the time of writing, almost 300 people have taken the test, and there are already some interesting results:
It seems as though the US questions are slightly easier
The UK test appears to be a little harder (judging both by the accuracy of laypeople, and with a subjective eye). And while accuracy generally increases with experience in both the UK and the US, the vast majority of UK respondents performed worse than a coin flip:
Some easy questions might skew the data in the US
Digging into the data, there are a few of the US questions that are absolute no-brainers (e.g. there's a question about the keyword [mortgage calculator] in the US that 84% of respondents get right regardless of their experience). In comparison, the easiest one in the UK was also a mortgage-related query ([mortgage comparisons]) but only 2/3 of people got that right (67%).
Compare the UK results by keyword...
...to the same chart for the US keywords:
So, even though the overall accuracy was a little above 50% in the US (around 56% or roughly 5/9), I’m not actually convinced that US SERPs are generally easier to understand. I think there are a lot of US SERPs where human accuracy is in the 40% range.
The Dunning-Kruger effect is on display
The Dunning-Kruger effect is a well-studied psychological phenomenon whereby people “fail to adequately assess their level of competence,” typically feeling unsure in areas where they are actually strong (impostor syndrome) and overconfident in areas where they are weak. Alongside the raw predictions, I asked respondents to give their confidence in their rankings for each URL pair on a scale from 1 (“Essentially a guess, but I’ve picked the one I think”) to 5 (“I’m sure my chosen page should rank better”).
The effect was most pronounced on the UK SERPs — where respondents answering that they were sure or fairly sure (4–5) were almost as likely to be wrong as those guessing (1) — and almost four percentage points worse than those who said they were unsure (2–3):
Is Google getting some of these wrong?
The question I asked SEOs was “which page do you think ranks better?”, not “which page is a better result?”, so in general, most of the results say very little about whether Google is picking the right result in terms of user satisfaction. I did, however, ask people to share the survey with their non-SEO friends and ask them to answer the latter question.
If I had a large enough sample-size, you might expect to see some correlation here — but remember that these were a diverse array of queries and the average respondent might well not be in the target market, so it’s perfectly possible that Google knows what a good result looks like better than they do.
Having said that, in my own opinion, there are one or two of these results that are clearly wrong in UX terms, and it might be interesting to analyze why the “wrong” page is ranking better. Maybe that’ll be a topic for a follow-up post. If you want to dig into it, there’s enough data in both the post above and the answers given at the end of the survey to find the ones I mean (I don’t want to spoil it for those who haven’t tried it out yet). Let me know if you dive into the ranking factors and come up with any theories.
There is hope for our ability to fight machine learning with machine learning
One of the disappointments of putting together this test was that by the time I’d made the Google Form I knew too many of the answers to be able to test myself fairly. But I was comforted by the fact that I could do the next best thing — I could test my neural network (well, my model, refactored by our R&D team and trained on data they gathered, which we flippantly called Deeprank).
I think this is fair; the instructions did say “use whatever tools you like to assess the sites, but please don't skew the results by performing the queries on Google yourself.” The neural network wasn’t trained on these results, so I think that’s within the rules. I ran it on the UK questions because it was trained on google.co.uk SERPs, and it did better than a coin flip:
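(For the curious, "better than a coin flip" here just means pairwise accuracy above the 0.5 baseline on the held-out question pairs. Below is a minimal sketch of that check in Python, with made-up numbers rather than Deeprank's actual outputs.)

```python
import numpy as np

def pairwise_accuracy(prob_a_wins, a_actually_won):
    """Fraction of question pairs where the model picked the page that really ranked higher."""
    picks = np.asarray(prob_a_wins) >= 0.5  # model picks page A when P(A wins) >= 0.5
    return float(np.mean(picks == np.asarray(a_actually_won, dtype=bool)))

# Hypothetical model outputs for ten held-out keyword/page-pair questions,
# and whether page A really did outrank page B in each case.
probs = [0.83, 0.41, 0.67, 0.55, 0.12, 0.71, 0.48, 0.90, 0.34, 0.62]
truth = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

print(pairwise_accuracy(probs, truth))  # anything consistently above 0.5 beats the coin flip
```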
So maybe there is hope that smarter tools could help us continue to answer questions like “why is our competitor outranking us on this search?”, even as Google’s black box gets ever more complex and impenetrable.
If you want to hear more about these results as I gather more data and get updates on Deeprank when it’s ready for prime-time, be sure to add your email address when you:
I feel reasonably confident that humans are not great at this task. The experiment is clearly not accurate enough to get really confident on the exact %s (and I acknowledged its weaknesses in the post) but:
My theory (which I'm still validating) is that neural networks will ultimately do better at this kind of task even when the human is given the same input data (i.e. tool outputs, metrics etc). So yes - it's a reflection of the strength of the data we have available from data and tool providers these days - but even with that in hand, based on my early experiments, I think the neural net is going to win out.
I used SpyFU and based my "guess" upon whichever site had the highest estimated SEO click value. It got me 100% correct. Is this cheating?
Without using SpyFU this would be terribly difficult. Sites with more backlinks were lower in ranking. Sites with better content didn't always win either.
This was a very humbling article, but it really got me thinking. Thanks!
Hah. I applaud your inventiveness. I think that has to be borderline, but I did say any tools, so well done :)
I did the test but I was not reminded of the answers I gave so I couldn't remember if a question was answered correctly or not.
I do feel as though they deliberately chose search terms with dodgy results. One of the higher ranking pages wasn't even live!
That wasn't a pathological choice - I didn't even realise it was down until one of the Distilled team pointed it out - I hadn't even visited the pages before creating the test. I certainly wasn't going through looking at metrics trying to pick tricky ones.
You have definitely made a mistake with one of the TV ones - the wrong URL is being used, which doesn't directly relate to the query, so logically the other one should rank higher. However, that URL is the "winner" despite not even appearing on the first page of the search results.
Sorry, I'm trying not to give too much info, as it's extremely easy to colour the survey (I saw the graph before I took the test and it was unfortunately very easy to spot the ones that would "trip" you up with this extra info).
Nice post Will, I listened to your advice at Brighton SEO. Keep it up.
Tbh, if you got the answers wrong in that test then you shouldn't be even doing SEO.
I think this is probably more indicative of Twitter users giving quick answers than of the diagnostic ability of SEOs. On that I would be interested to see the same study conducted via email entry for example...
I would also love to see that - the UK results do seem genuinely hard though - when I tried running it with a small set of actual experts, they got similar results (small sample size though).
I took the test; it went well.
Per the instructions: "In answering, you can take as much time, and use whatever tools you like to assess the sites". You don't need to just work off the URL.
Very fun taking the test. You should make more; it's a very good way to improve our SEO skills.
Hello Will,
You raised a very interesting topic and familiar question ;) So, thanks for that ...
Well, I have done the test but am not sure what my selected answers were (it's hard to remember them at the end, seriously). Do you think only URL structure matters to rank high in the SERPs? Because it's hard to predict from the URL alone which page is going to rank.
Actually, I think there is something missing; let me explain where and why. For many questions it sounds like the keyword could rank for both of the URLs, so (from my point of view) there may not be a single right answer.
I could be wrong, I'm not sure, but that's been my understanding over the last 5 years. Correct me if I am wrong, or suggest something that could help. Thanks in advance, and I appreciate this blog :)
Thank you !
Will, thanks for the thought-provoking quiz and post. Just wanted to give some two cents. I have little data to support any of this -- it's all hypothetical conjecture on my part.
“What would we have to do to outrank that site?”
“Why is our competitor outranking us on this search?”
Those two questions are exactly the reason that I have largely disassociated myself from the "SEO" brand in recent years. I got sick and tired of people asking questions (such as those two) that were getting increasingly impossible to answer. Too many employers and clients still think -- even in 2016 -- that all one needs to do is wave a magic wand to rank first in Google.
But as we all know, that is not true. Even after we do all of the technical and on-page optimization, there are still countless other factors that go into play. And as Google invests more in AI and machine learning, Google is going to speed further and further away from people like us having any concrete idea of why one thing ranks highly over another.
My hypothesis: Google is thinking more and more like a human being, and no set of SEO metrics will be able to replicate what a human brain thinks. Brands will soon be the top "ranking factor." But how can SEO metrics quantify a brand?
Assuming this is true (and it may well not be true), how should we respond? Well, here's how I personally view things. (And it's just me.)
As Rand once tweeted here, SEO results are increasingly just by-products of doing good web development and marketing. So, I make sure that our website is 100% OK on a technical level and that on-page optimization on pages and posts targets our desired keywords.
But everything else is just good marketing and PR. We don't do "linkbuilding." We don't do "guest posting." Here is just a little of what we do:
-- Create and execute a publicity strategy that is one part getting relevant, authoritative publications and bloggers to write about us and one part contributing thoughtful by-lined articles to such outlets
-- Form corporate partnerships that result in cross-promotion
-- Sponsor and speak at conferences to raise awareness and get people talking about us online and offline
All of this is just good marketing work that one would do even if the Internet did not exist. And this real marketing work results in brand awareness, increased web searches, and inbound links as natural by-products. So, where is "SEO" today when traditional marketing seems to have always delivered the best results?
So, Will, in terms of specific pages and queries and why things rank #2 versus #3, I don't even think about it. I don't worry about the small things. As long as we do all of this marketing work, then our rankings in general and as a whole will increase over time. If the trend is "up and to the right" over the long term, I am happy. Google is too smart to make it worthwhile to think too much about the small stuff anymore.
Create a website that delights your target audience, do the technical and on-page SEO, and then build a good brand. All the rest will fall into place. (Not that it's easy.)
Yup - I'm totally with you on generally not having given this much thought - and in having built many strategies that entirely ignore the page-level ranking detail questions this post covers.
However. I also want to be able to build business cases for the strategies I'm recommending for my clients, and I want to be able to figure out how much benefit a client might get if they are successful in making certain changes I want to recommend or in raising their visibility and authority in various ways.
To do that, I need to understand rankings in a portfolio sense - and with this experiment I'm mainly toying around with whether I can build that understanding "bottom up" from individual rankings. And I do believe that for us to give the best possible advice to our clients, we do need to understand the changing landscape of ranking factors - I started out thinking that this test was testing exactly that understanding!
Finally - to your point about brand and user experience - I am more bullish than it sounds like you are about computers' ability to model those things. I think they will just turn into the "next generation" of ranking factors that Google cares about - and hence will be things that our own models (mental and computer) can take account of.
Many, many, many global brands do an excellent job with all the PR and marketing that you talk about, and are doing pretty well in general - yet they are still missing opportunities in SEO. It is a fact, and there are thousands of brand websites that could be doing a lot better in the search results and seeing increased revenue and profit. Large companies often spend 10x as much on PPC as on organic search, when shifting some money to the latter could save a lot of money.
So yes the things you're talking about are great and necessary, but stopping at just the basics of technical optimization and on-page factors is negligence if you are managing organic search for a company.
I got 8/11
I was surprised to see that the YouTube link was not the one ranking higher in the search for a Canon 5D review. I just recently did some research on social media organic ranking and YouTube was pretty much dominating SERPs.
The only way you'd have known better is if you'd spent time in the photog niche. DPReview is the alpha and omega of everything camera-review related. I wouldn't be surprised if every Wikipedia article related to camera technology includes a link back to DPReview.
Amazing! This project was just amazing Will! Would love to see others that are similar.
Kinda puts into perspective how weak our intuition is without looking at stats and using tools. Got 50% correct answers for the US SERPs, and can't believe just how uncertain I was for almost all of them. Looking back at the results now, and the pages in question, it is kinda revealing.
I guess we should pay more attention to understanding user's intent, and understanding audiences more. Otherwise it's a chase in the dark.
Funny how empowering it feels when you do this test, and then realize that you can add stats like site speed, DA, link metrics, anchor text distribution, social shares and comment counts... Should take each question again, using tools and writing down numbers, just to see if I can make a more informed guess, and whether or not it will change anything in terms of being correct/wrong.
As for combating machine learning by using machine learning- it sounds exciting. If anything, we can always use the results and tests and reverse engineer- learning good SEO practices, and how certain signals are weighted within various different industries and types of queries, along the way.
P.S. never realized I would enjoy these types of tests until now. It's very addictive :)
Just completed the survey; would love to see the findings.
Think my SEO guys need to read this!
I'm having problems with the WooRank trial version. When I scan my site, one time I get a score of 53, another time 62...
Sometimes it doesn't find my social pages, or it reports problems with a page (a 404 error), and many other differences. The robots.txt file too....
What is the problem? Could it be a bad server, because sometimes the program doesn't find the URL, or is it something else?
I need help. I am new to this blog; sorry if my expressions are incorrect, but I am Spanish and don't speak English very well.
Thanks to everybody that answers me.
Hi Will, I have another question; is this statement correct?
It looks like the humans in your test were given urls and queries only. They were encouraged to guess (obviously).
Deeprank was fed SEO metrics such as DA and so on.
That to me is two different tests.
You're actually asking whether a. supervised learning can produce an accurate prediction based on commonly available SEO metrics (where the developers of the metrics have in some cases already tried to use ML to predict rankings and factored that into the data, so it should already be correct to a certain degree)
And b. Are SEOs good at guessing rankings if you give them a few urls and queries?
So I'm struggling to see the value of the comparison. Isn't it unfair? What if you gave the SEOs precisely the same input data as your ml models (granted you might have used a large data set to train) but still, you get my point, right? Let them get excel out and do a serious analysis etc!
I think it's really interesting and I definitely agree with you; we could use machine learning to predict some really cool stuff. I just don't think I agree that your comparison of ml to experienced SEOs is fair based on (what i believe to be your) method. Apologies if I've missed this in the article!
The instructions for humans said "you can take as much time, and use whatever tools you like to assess the sites, but please don't skew the results by performing the queries on Google yourself" -- so they had access (if they wanted it) to far more data on those sites in question than the model (which only had about 10 metrics per URL - a handful of link metrics and a handful of quite naive keyword "scoring" metrics). They also could examine the actual pages - which would give them far more in the way of relevance and quality information than the model had access to.
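To give a rough idea, a feature row per URL was just a small vector along these lines (a sketch with hypothetical metric names, not the exact ones we used):

```python
# Rough sketch of a ~10-metric feature row per URL. Metric names are
# hypothetical stand-ins, not the exact metrics the model was trained on.
from urllib.parse import urlparse

def keyword_scores(url, title, keyword):
    """A handful of naive keyword-relevance scores for one URL."""
    kw_words = keyword.lower().split()
    parsed = urlparse(url.lower())
    title_words = set(title.lower().split())
    return {
        "kw_in_domain": int(any(w in parsed.netloc for w in kw_words)),
        "kw_in_path": int(any(w in parsed.path for w in kw_words)),
        "kw_in_title": int(keyword.lower() in title.lower()),
        "title_word_overlap": len(set(kw_words) & title_words),
    }

def feature_row(url, title, keyword, link_metrics):
    """Combine the naive keyword scores with link metrics from an external index."""
    row = keyword_scores(url, title, keyword)
    row.update(link_metrics)
    return row

example = feature_row(
    url="https://www.example.com/mortgage-calculator",
    title="Mortgage calculator - work out your repayments",
    keyword="mortgage calculator",
    link_metrics={"root_domains_linking": 1200, "external_links_to_page": 85,
                  "domain_authority": 61, "page_authority": 48},
)
print(example)
```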
Obviously, I'm sure some people just tried to figure it out from the URLs (which may only show that computers are better at following instructions than people!) - but in small scale experiments with people I briefed and observed taking the test, the overall results were similar.
I could have published the training data, but I don't think that would have necessarily made it fairer - the humans (the expert SEOs at least) had access to far more of that in the way of all the various ranking factor studies etc that have been done over the years.
I am certain there will be flaws in the method, but the more I see, the more convinced I am that computers will be better at this than humans.
We're working on more training data and more metrics for the model - I hope we can see some steps forward in accuracy - will let you know!
I don't disagree!
This reads more like listing out the frustrations of being in the SEO business rather than insights! However, it is a good article in that many managers and owners need to understand and assimilate this. There are hundreds of reasons why a site will probably not rank higher than a competitor in spite of best efforts, and the only way to resolve this is through iterative processes. Sadly that involves time, and time is money, which most companies are not willing to shell out. It is a hard business to be in :)
Hey Will, very interesting stuff!
I too have felt in the past year that it's getting harder and harder to manipulate search algorithms and track which of the changes I've made have led to positive results. I am admittedly not very familiar with neural networks or deep learning, and with RankBrain now being included in every search query, I think it would be a very wise move for all SEOs to follow in your footsteps and start educating ourselves on this type of technology. Thanks for getting me thinking in the right direction!
Hi Will, I agree 100%. In our industry, the money is made by picking the right client, and it is only fair for the client to know whether an SEO agency can rank him for his budget or not. Understanding how to do a competitor analysis correctly is what separates an expert from a mediocre one. Also, an SEO consultant can only help so many clients, so he should know how he spends his resources.
cheers, Marcus
[Link removed by editor.]
In the UK test the first question should have been fairly easy, as at the top of https://www.bbc.co.uk/homes/property/mortgagecalcul... it says "This page has been archived and is no longer used". That suggests that there will be fewer internal links to it, that it hasn't been updated for a while and that there may be a better page on the BBC website that would rank higher. The Barclays page was therefore the obvious choice. With the others, I admit, it was tricky!
I think one of the things that makes this a lot trickier these days is that Google is better able to measure that elusive value of 'quality content'!
It would be interesting to do a test with say the 1st position result and the 25th position result, I would imagine SEO experts would do considerably better.
Hi Will! It's interesting how the accuracy is just like you said: like a coin flip. At the same time it's sad, because it seems that although we "know" how SEO works, our accuracy is not so good.
I'm a little stressed. Last week I was on the second page for my main keywords and now I'm done! I'm on the third page :'(
And the strange thing is that the upper positions are occupied by competitors that have never been there before, not even on pages 4, 5 or 6. It's very strange.
Will, can you please tell me how this can happen, and why, when I search for my main keyword, the result is different if I'm using Safari than if I'm using Firefox or Chrome? How could this happen? :(
Not a very user-friendly quiz. I did not recall all of my answers from the quiz.
Wow - Deeprank sounds pretty exciting - glad you're working on something cutting edge like this. And it's comforting to know that at your level of agency maturity you're still being asked this question and it's still not clearcut.
What resources are you using to learn Tensorflow and Keras? Is there a good guide?
I started with the tensorflow "getting started" stuff - the worked example etc. - then did the same with Keras (via docker). Then I spent an embarrassingly long time trying to figure out tensors to make my own models. The Keras github repo has some good stuff (the author has replied to a lot of issues) and apart from that, I just googled, used stackoverflow, etc.
Good luck!
My data is essentially worthless ... the last question gave a server error for the winning one - the other one gave a loading error at first but did load on the third attempt. Ironically, the one with the server error is the "winner". Another contender of a previous question redirected me to a German page. Two other URLs did not load at all.
So, sadly, I'd say that this is heavily skewed by (maybe?) random factors. I mean, some of the pages did not load for me ... if they loaded fine in the test ... it does not take much thought that if one page does not load at all, I'll choose the other one; and that if I am redirected to a totally different URL, I'll guess blindly.
To be sure: I think this is a great idea and I'd love to learn more about how the ranking _really_ works in any way possible. But I hesitate to draw any conclusions from this - for most questions I did indeed feel that I was guessing around, lacking really a lot of context and knowledge. UK (which I chose) not being my "home market" probably did not make it easier.
If I was asked to draw a conclusion from this, it would probably be: I cannot really tell with any confidence at all which of two pages is ranking better if I only look at the query and the two pages. And actually, I am not too surprised by this. Still, thanks for the topic! I'd love to see this lead to some "real" learnings.
Sorry - country redirections could definitely screw things up. It's tricky to make a perfectly scientific study here, but I'm still interested in what I *am* finding.
Regarding the site being down - that's a fascinating one - it was down on the day I pulled the rankings as well. Just one of those weird quirks - it shouldn't still be ranking, but it is...
Maybe I'm just making excuses then ha!
Out of interest, were the results close to each other in SERPs or was there a considerable gap? I found some of them were definitely on a par with each other, such as the mortgage comparison pages. Others varied a lot in quality. I guess page authority still plays a big part.
They varied - some were as close as #2 & #3 while the furthest apart were #2 & #10 or #1 & #7.
I should probably look at how accuracy varies by that distance...
OK - I had a quick glance, and there's no real pattern - people didn't get the close together ones wrong significantly more often than those which were far apart.
"In fact, if you could find a way to test this effectively, I wonder if it would make a good qualifying question for the next moz ranking factors survey. Should you only listen only to the opinion of those experts who are capable of answering with reasonable accuracy?"
Absolutely! Or, at least, a "trustworthy score" given to each expert based on their accuracy. So the opinion of those with a higher score of trustworthiness is worth more than someone with lower results.
I'm not entirely sure how well this test measures the competence of an SEO professional though. Are we basing predictions based on a quick skim of on-page factors only? Or are we expected to freely use tools in order to look at the bigger picture?
Will, don't want to start a snowball effect here but any chance you can pull my results? Forgot to check and genuinely curious how I did.
Did you feel good about the validity of the outcome, Will? From what I gather, it's a bit unfair to conclude that "most SEOs are no better than a coin flip...". As Mr Isidoro mentioned, I'm guessing the few expert SEOs who participated had little to no time to consult their tools (did you record how long they estimated it took them to reach their conclusions?).
I bet if this was a piece of paid consulting work they'd have produced better results!
Question: you've made a machine learning tool that guesses which page might outrank another (from a choice of two pages?) when fed SEO data (from a variety of sources?). Am I right in saying the 64% accuracy result is more a reflection of the work the tool data providers have put into the accuracy of their data?
I got 10/11. I failed on the last question (I chose YouTube, thinking that the high authority of the domain would play the main role, but I was wrong).
Great! I took the test, and even though I didn't score much, I was still able to analyze how Google thinks when it ranks websites in the top positions of the SERP. It will be very interesting to see more stuff on this (sharing my email with you).
I tried the test but forgot most of my answers so I am not sure of my score. LOL
Anyway, I answered the items according to on-page SEO guidelines but still I couldn't test the authority of the sites' backlinks. Therefore, in the end, it was 50% knowledge-based and 50% flip-the-coin.
Will - thanks for pulling this together - great way to generate some interesting data and force us to test our SEO mettle. I'll take the quiz soon ;). Am very curious to see more of your insights on this topic as well. I like the trend - at least a signal that "gaming" is getting diminished more and more.
FWIW, you can change any Google Form into a Quiz. This allows you to give each user feedback on their answers. I *think* it's as simple as a couple settings changes + marking the correct answer values in the form (See the link below). Worth a look for this or future quizzes.
Cheers!
Dan
https://support.google.com/docs/answer/7032287?hl=...
Hi Will
Done the test!!
It didn't take me long to do ... I got about half of the questions right.
Simply put, no one can give you a fair identification of which page will rank better and which page will outrank the other. You only have to be honest with your strategies and think about user experience, and you will reap satisfactory results. But the definition of a "satisfactory result" is still not clear. It's unquantifiable most of the time.
This should have been "which url ranks better", since the domain/brand authority and keyword visibility in the url was all that could be viewed within the quiz itself.
I was expecting to gauge my interpretations based more on on-page content/relevancy of the page and its ability to hold user engagement.
I suppose I could have gone and viewed each URL individually, but that would have taken away from the ability to briefly take the quiz.