We've been getting a lot of questions in Q+A and on the road at events like last week's Miva Merchant conference, Online Marketing Summit and the YCombinator conference about how to properly paginate results for search engines. In this post, we'll cover the dangers, opportunities and optimization tactics that can best ensure success. The best part? These practices aren't just good for SEO, they're great for usability and user experience too!
Why is Pagination an SEO Issue?
Pagination, the practice of segmenting links to content on multiple pages, affects two critical elements of search engine accessibility.
- Crawl Depth: Best practices demand that the search engine spiders reach content-rich pages in as few "clicks" as possible (turns out, users like this, too). This also impacts calculations like Google's PageRank (or Bing's StaticRank), which determine the raw popularity of a URL and are an element of the overall algorithmic ranking system.
- Duplicate Content: Search engines take duplication very seriously and attempt to show only a single URL that contains any given piece of content. When pagination is implemented improperly, it can cause duplicate content problems, both for individual articles and the landing pages that allow browsing access to them.
When is Pagination Necessary?
When a site grows beyond a few dozen pages of content in a specific category or subcategory, listing all of the links on a single page of results can make for unwieldy, hard-to-use pages that seem to scroll indefinitely (and can cause long load times as well).
Clearly, I need to log into Facebook more often...
But usability isn't the only reason pagination exists. For many years, Google has recommended that pages contain no more than 100 links (internal or external) in order to make it easy for spiders to reach down deep into a site's architecture. Many SEOs have found that this "limit" isn't hard and fast, but staying within that general range remains a best practice. Hence, pages that contain many hundreds or thousands of links may inadvertently be hurting search engines' access to the content-rich pages in the list, making pagination essential.
Numbers of Links & Pages
We know that sometimes pagination is essential - one page of results just doesn't cut it in every situation. But just how many links to content should the average category/results page show? And how many pages of results should display in the pagination?
There are a lot of options here, but there's serious danger in using the wrong structures. Let's take a look at the right (and wrong) ways to determine link numbers.
In some cases, there are simply too many pages of results to list them all. When this happens, the very best thing you can do is to work around the problem by... creating more subcategories! It may seem challenging or even counter-intuitive, but adding either an extra layer of classification or a greater number of subcategories can have a dramatically positive impact on both SEO and usability.
There are times, however, when even the creation of many deep subcategories isn't enough. If your site is big enough, you may need to have extensive pagination such that not every page of results can be reached in one click. In these cases, there are a few clear dos and don'ts.
Do:
- Try to link to as many pages of the pagination structure as possible without breaking the 100(ish) links per page limit
- Show newer content at the top of the results list when possible, as this means the most link juice will flow to newer articles that need it (and are temporally relevant)
- Use and link to relevant/related categories & subcategories to help keep link juice flowing throughout the site
- Link back to the top results page from each of the paginated URLs
Don't:
- Show only a few surrounding paginated links from paginated URLs - you want the engines to be able to crawl deeper from inside the structure
- Link to only the pages at the front and end of the paginated listings; this will flow all the juice to the start and end of results, ignoring the middle
- Try to randomize the paginated results shown in an effort to distribute link juice; you want a static site architecture the engines can crawl
- Try to use AJAX to get deeper into the result sets - engines follow small snippets of JavaScript (sometimes), but they're not at a point where this is an SEO best practice
- Go over the top trying to get every paginated result linked-to, as this can appear both spammy and unusably ugly
When in doubt, consider the goals you're optimizing toward - the need for fewer extra pages of pagination, the desire to make the browsing experience usable (many webmasters mistakenly assume users will simply give up and search, forgetting that some of us can't recall the name of the piece we're looking for!) and the importance of maintaining a reasonable count of links per page. Also note that although I've illustrated using 5-10 listings (for graphical space reasons), a normal listings set could be 30-90 links per page, depending on the situation.
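To make that balancing act concrete, here's a minimal PHP sketch (hypothetical function and parameter names, not tied to any particular CMS) of one way to choose which page numbers to link from a given results page: a window of nearby pages plus evenly spaced links across the full range, capped at a set budget.

```php
<?php
/**
 * Hypothetical helper: pick which page numbers to link to from the
 * current results page. Links a window of nearby pages plus evenly
 * spaced pages across the whole range, capped at $maxLinks, so deep
 * pages stay reachable without blowing the per-page link budget.
 */
function paginationLinks($currentPage, $totalPages, $maxLinks = 20, $window = 3)
{
    $pages = array(1, $totalPages, $currentPage); // always include first, last, current

    // Nearby pages so crawlers (and users) can step through the structure
    for ($i = $currentPage - $window; $i <= $currentPage + $window; $i++) {
        if ($i >= 1 && $i <= $totalPages) {
            $pages[] = $i;
        }
    }

    // Spread the remaining budget evenly across the whole range so the
    // middle of the result set isn't ignored in favor of the ends
    $remaining = $maxLinks - count(array_unique($pages));
    if ($remaining > 0) {
        $step = max(1, (int) floor($totalPages / ($remaining + 1)));
        for ($i = $step; $i < $totalPages; $i += $step) {
            $pages[] = $i;
        }
    }

    $pages = array_unique($pages);
    sort($pages);
    return array_slice($pages, 0, $maxLinks);
}

// Example: from page 57 of 117 results pages, emit roughly 20 pagination links
print_r(paginationLinks(57, 117));
```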
Titles & Meta Descriptions for Paginated Results
In most cases, the title and meta description of paginated results are copied from the top page. This isn't ideal, as it can potentially cause duplicate content issues. Instead, you can employ a number of tactics to help solve the problem.
Example of results page titles & descriptions:
Top Page Title: Theatres & Playhouses in Princeton, New Jersey
Top Page Meta Description: Listings of 368 theatres, playhouses and performance venues in the Princeton, NJ region (including surrounding cities).
Page 4 Title: Page 4 of 7 for Princeton, New Jersey Theatres & Playhouses
Page 4 Meta Description: Listings 201-250 (out of 368) theatres, playhouses and performance venues in the Princeton, NJ region (including surrounding cities).
Alternate Page 4 Title: Results Page 4/7 for Princeton, New Jersey Theatres & Playhouses
Alternate Page 4 Meta Description: (none)
Yes, you can use no meta description at all, and in fact, if I were setting up a CMS today, this is how I'd do it. A missing meta description reduces complexity and the potential mis-casting of URLs as duplicates. Also notice that I've made the titles on results pages sub-optimal to help dissuade the engines from sending traffic to these URLs rather than the top page (which is made to be the better "landing" experience for users).
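If you're templating this in a CMS, the pattern is straightforward to automate: descriptive title and meta description on page one, a deliberately plainer title and no description on deeper pages. A minimal PHP sketch, using hypothetical function and variable names:

```php
<?php
/**
 * Hypothetical helper: build the <title> and meta description for a
 * paginated category page. Page 1 gets the full, descriptive tags;
 * deeper pages get a plainer "Page X of Y ..." title and no description.
 */
function paginatedMeta($categoryTitle, $categoryDescription, $page, $totalPages)
{
    if ($page <= 1) {
        return array('title' => $categoryTitle, 'description' => $categoryDescription);
    }
    return array(
        'title'       => sprintf('Page %d of %d for %s', $page, $totalPages, $categoryTitle),
        'description' => null, // omit the meta description tag entirely on deep pages
    );
}

$meta = paginatedMeta(
    'Theatres & Playhouses in Princeton, New Jersey',
    'Listings of 368 theatres, playhouses and performance venues in the Princeton, NJ region (including surrounding cities).',
    4,
    7
);

echo '<title>' . htmlspecialchars($meta['title']) . "</title>\n";
if ($meta['description'] !== null) {
    echo '<meta name="description" content="' . htmlspecialchars($meta['description']) . '">' . "\n";
}
```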
Nofollows, Rel=Canonicals and Conditional Redirects
Some SEOs and website owners have, unfortunately, received or interpreted advice incorrectly about employing directives like the nofollow tag, canonical URL tag or even conditional redirects to help control bot activity in relation to pagination. These are almost always a bad idea.
Whatever you do, DO NOT:
- Put a rel=canonical directive on paginated results pointing back to the top page in an attempt to flow link juice to that URL. You'll either misdirect the engines into thinking you have only a single page of results or convince them that your directives aren't worth following (as they find clearly unique content on those pages).
- Add nofollow to the paginated links on the results pages. This tells the engines not to flow link juice/votes/authority down into the results pages that desperately need those votes to help them get indexed and pass value to the deeper pages.
- Create a conditional redirect so that when search engines request paginated results, they 301 redirect or meta refresh back to the top page of results.
The only time I recommend using any of these is when pagination exists in multiple formats. For example, if you let users re-sort by a number of different metrics (in a restaurant list, for example, it might be by star rating, distance, name, price, etc.), you may want to either perform this re-sort using JavaScript (and employ the hash fragment in the URL) or make those separately segmented paginated results rel=canonical back to a single sorting format.
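As a rough illustration of that re-sort case, the sketch below (hypothetical URL structure and parameter names) emits a rel=canonical tag on re-sorted variants that points back to the default sort of the same paginated URL, so only one sorting format competes for indexation:

```php
<?php
// Hypothetical example: /restaurants?page=3&sort=price canonicalizes to
// the default sort of the same page, /restaurants?page=3.
$page      = isset($_GET['page']) ? max(1, (int) $_GET['page']) : 1;
$sort      = isset($_GET['sort']) ? $_GET['sort'] : 'default';
$canonical = 'https://www.example.com/restaurants?page=' . $page;

if ($sort !== 'default') {
    // Only the re-sorted variants get a canonical back to the default sort;
    // the default paginated pages themselves are left alone.
    echo '<link rel="canonical" href="' . htmlspecialchars($canonical) . '">' . "\n";
}
```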
Letting Users Display More/Less Results
From a usability perspective, this can make good sense, allowing users with faster connections or a greater desire to browse large numbers of results at once to achieve these goals. However, it can cause big duplicate content problems for search engines and add complexity and useless pages to the engines' indices. If/when you create these systems, employ JavaScript/AJAX (either with or without the hash fragment) to make the pages reload without creating a separate URL.
(the Google Analytics interface allows users to choose the number of rows shown, though they don't have to worry much about crawlability or search-friendliness)
Also remember that the "default" number of results shown is what the search engines will see; so make that count match your goals for usability and SEO.
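One way to offer the choice without minting new crawlable URLs (an approach also suggested in the comments below) is to store the user's preferred result count in a cookie and keep the URL constant, so the default the engines see never changes. A minimal PHP sketch with hypothetical cookie and form field names:

```php
<?php
// Hypothetical sketch: let users pick how many results to show without
// creating a separate URL. The preference lives in a cookie, so the URL
// (and therefore what the engines crawl and index) stays the same.
$default = 30;
$allowed = array(30, 60, 100);

if (isset($_POST['per_page']) && in_array((int) $_POST['per_page'], $allowed, true)) {
    $perPage = (int) $_POST['per_page'];
    setcookie('per_page', (string) $perPage, time() + 60 * 60 * 24 * 30, '/');
} elseif (isset($_COOKIE['per_page']) && in_array((int) $_COOKIE['per_page'], $allowed, true)) {
    $perPage = (int) $_COOKIE['per_page'];
} else {
    $perPage = $default; // what crawlers and first-time visitors see
}

// ... fetch and render $perPage results for the current page ...
```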
Additional Resources
- A Gallery of Pagination Examples and Recommendations from Smashing Magazine
- A Farewell to Pagination from SEOmoz's Whiteboard Friday series
- The SEO Pager Plugin for Wordpress, from SEO Egghead, is a highly customizable set of options that allows you to create search-engine-friendly pagination in the Wordpress CMS
If you have any thoughts or recommendations to share in the comments, we'd love to hear from you!
This is the most comprehensive article on pagination and how it affects SEO that I have ever read. Very useful. Thanks. Bookmarked.
Edit: Oh, and one thing I have done on some of my e-commerce sites is a little "white hat cloaking". I know that this site's product categories never go over 100 products, so, on the product browse pages, I detect search engines and make the default display "all products" for them. For all other users the default is 30 products per page. This has really helped to get all the products indexed. On a site with a larger number of products you can make the default 100 products per page for the engines, while keeping it manageable for users.
Here's the PHP code to detect search engines for anyone who wants it. I'm sure it could be updated to be more comprehensive, I wrote it a while ago.
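(The original script isn't reproduced here; purely as an illustration, and not the exact code referenced above, a simple user-agent check of this kind, with a non-exhaustive bot list, might look something like the following.)

```php
<?php
/**
 * Illustrative sketch only (not the original script): returns true if the
 * request appears to come from a major search engine crawler, based on a
 * simple, non-exhaustive user-agent substring match.
 */
function isSearchEngine()
{
    $agent = isset($_SERVER['HTTP_USER_AGENT']) ? strtolower($_SERVER['HTTP_USER_AGENT']) : '';
    $bots  = array('googlebot', 'bingbot', 'msnbot', 'slurp', 'baiduspider', 'yandexbot');

    foreach ($bots as $bot) {
        if (strpos($agent, $bot) !== false) {
            return true;
        }
    }
    return false;
}

// Usage: show the full product list to crawlers, the 30-per-page default to everyone else
$perPage = isSearchEngine() ? PHP_INT_MAX : 30;
```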
Sweet Darren! That's a great additional tip.
wish i could give you 10 thumbs up for sharing that script. excellent stuff, been meaning to do some user agent based scripting for a while, this will make life easier ;)
Nice little script you got there. Thanks for sharing it!
Great post and response from Whitespark, thumbs up to both.
Question though re detecting bots and showing them one thing while real users get a different page of results - is that not very dangerous territory?
I fully appreciate it is not cloaking of the spammy kind, not at all, just concerned at how it might be treated by Big G if it detects the result for a bot is different to the result for a user?
Hmmm, not sure if you can call this white hat cloaking:
"Cloaking is a black hat search engine optimization (SEO) technique in which the content presented to the search engine spider is different to that presented to the user's browser."
While I found your script and technique very interesting and will most likely give it a try, I think what you're doing is quite black hat, technically speaking. I'd be interested to know what Rand and the like think about it. I am sure most would be able to get away with it, but it officially goes against Google's rules.
In any event you have a thumbs up.
Cloaking is definitely a sensitive topic in SEO. I believe there are legitimate and safe uses for it, though. I figure if it makes it easier for the search engines to index my site, then it will make Google happy. I'm not trying to do anything deceptive, and I wouldn't hesitate to tell Matt Cutts what I'm doing if I ever bump into him at a conference. It's probably better to call it "IP Delivery," as the word cloaking has a negative connotation.
See this 2006 SEOmoz article on Cloaking for more information.
Unfortunately, this may be a fine line to walk. Totally agree that the intent is a positive one.
But this is directly targeting user-agent and serving up content entirely on that. Perhaps with IP lookup, but even then, it could be risky.
Unfortunately, one of the problems is that the site it may be used on may not have enough reputation to "let it fly." Sadly, smaller names could get smacked down even without crossing the line, even if the intent is good, while big name sites may cross that line with less than honorable intent with little more than a "tsk, tsk" from the engines.
If the number of "items" are still somewhat limited, say 100/page or so, I'd rather play it safe and serve that "view all" or expanded listing, allowing users to select their preferred listing number and using cookies from then on to deliver their expected presentation. This way, engines and users (initially or w/o cookies) will see the same thing.
I don't really think this is too risky. The content I'm displaying to the engines is the same as the content I'm displaying to human visitors. It's just a small configuration change, and one that a visitor can choose themselves, so I think it's pretty safe. It's not like I have some hidden content that is only displayed to search engines. This site has been up and running for a year and has been on a continual climb in the search engines.
I'm quite confident that if you asked a member of Google's search spam team, they would say this is no problem. You could play it safe if you're concerned, but the benefit of this technique makes it worth any perceived risk. I really think the risk is zero anyway.
@identity:
I did a little more research, and I have changed my opinion. This is in fact a somewhat risky practice. Google clearly states that if you specifically target the bots and show them different content than human visitors, then this IS cloaking, and is not recommended. See this video (jump to 2:53 to see the section on cloaking).
I plan to keep my "view all" for search engines in place, but I agree with you that it is a risky practice. So far it's working well, but I guess we'll see if anything happens in the long run.
Your specific experience may be okay, as might others...my comments are more around the general risks to anyone employing this.
Their definition here is very tight. That isn't to say they don't look past this, but this rigid definition allows the "we clearly stated and you ignored" stance on their part.
This could also be an instance where other practices that a site may be employing could be the tipping point that gets them dinged for this where another comparable site would be okay.
I'd say the industry generally agrees that Google's definition in this area is extreme, but that isn't to say that how they treat it is as cut and dried. Definitely an area that needs to be employed with an understanding of the risks and an awareness of everything else happening on a site.
Thanks for digging into this more, Whitespark, and coming back with this opinion too - that's another thumbs up from me.
I didn't mean to cause a minor stir, honestly!
To achieve the same desired effect without detecting the agent, why not just serve up the 100 results all the time and use ajax to trim the results to 30, which will only be seen by the human user. Just a thought.
I have used your neat piece of code for some time now and it's very useful on many occasions. So I take this opportunity to thank you :)
Pagination is a huge issue for ecommerce. And if presentation elements, e.g., various sortings, are crawlable, you may end up with a big bowl of duplicate soup on your hands.
Ecommerce is especially challenged because there is often so little "content" to distinguish page 1 from page 2, 5, 25, or 100...product names, maybe SKU numbers, and pricing is often about it. Some of the worst are online shoe retailers, which often have large inventories and large amounts of pagination even after subcategorizing to the lowest possible levels.
- breaking into smaller subcategories is definitely a key area that is often overlooked.
- switching to a view all or expanding to a larger product set can go a long way to minimize or even eliminate pagination. Users can always be given a choice of "number of products" to be shown, which could be stored in a cookie for them.
- if any pagination still exists, be sure that you don't inadvertently generate duplicates... the link back to "page 1" is often a different URL (e.g., widgets.html vs. widgets.html?page=1), or the URL for page N is different from the "next" or "previous" link, even though they are the same destination.
And if the presentational construct links like sort by price, alphabetical, ratings, brand, etc. are all crawlable, then forget pagination issues for now...you've got a much bigger problem to address than pagination.
Each one of those modifiers can probably be sorted "up" or "down" (ascending or descending) and each of those probably creates a unique URL. Multiply that by the number of sorting criteria, and even a single page of results without pagination is serving up multiple duplicates.
Great article, Rand. The only thing I would add is one really simple don't. Bear with me because it's such a no-brainer that I'm embarrassed to mention it. I've worked with several clients that have gone through the trouble of writing copy for paginated categories like these, and they add the optimized content somewhere below the fold. You can argue that this isn't all that worthwhile, which I would possibly agree with, but the one thing you don't want to do is implement it the lazy way. I've seen some developers add this content to every page in the results instead of only the first page. This would obviously be a problem if you are trying to make the first page the most relevant page for that category.
There was some talk above about noindexing these pages because they have minimal value. Although it’s true they have minimal SEO value, it’s better than having no value. In other words, there’s no downside from having them, so why not let them get indexed? And more importantly, as Ehren mentioned, these low value pages might be the only way for search engines to get to certain lower level product/article pages. Having a Meta Robots “noindex, follow” will allow PageRank to flow, but it will eliminate any relevance that can get passed on through these pages. You’d also minimize the value of any random external links that might pop up to that low value page.
Good point, "why not let them get indexed"? My answer would be: because the content of these sorts of pages is usually subject to constant change. And even if the content stays the same, it's not unique; it adds no value.
Great post Rand!
...and thanks for that juicy link :-)
i have dealt a lot with paginated archives ... here's a few tips:
- increase the number of links per page to somewhere around 250. although the engines loosely recommend "about 100", i have used "about 250" for years with no negative impact. by increasing the size of the result set for each page, you decrease the extent of the pagination
- code a unique title boilerplate for at least the first 5 pages of the archive. (i usually code a unique meta description too, although the article's idea to drop the meta description is interesting.) then, on page six, cycle back to use page 1's title but append the page number. introducing unique titles/metas to the first n pages of the archive had huge impact in 2009
- someone already touched on this, but provide the crawler with paths to the top, middle & bottom of the paginated structure from each page. we used a logarithmic solution for this, so the crawler ostensibly had a path to all sections of a huge structure from each entry page.
- finally, be aware that this type of archive is really hard to seo. in late 2009 google started to get much more selective about pages like this. 2010 & beyond it will just get harder to get these types of thin pages indexed, so manage expectations internally properly from the start
good luck!
I like your idea of creating unique meta-descriptions for at least the first 5 pages...think I'm going to try that! Thanks!
awesome article, timing even better! if we are not to use rel=canonical to funnel the juice, do we want the engines to index our paginated pages? should we noindex, follow the paginated pages? ... if you were building a cms today :)
this is why i don't quite agree with this post....
why do you want the pages to be indexed? what value does having a page indexed with the page title "page x of y for [title]" actually provide?
In most cases, apart from the page title, the content is going to be exactly the same apart from a load of results. There's no valuable content, really. I see the value of allowing the spiders to reach all of the lower-level content, so would noindex,follow not do?
And usability: I don't personally see the value of showing all the pagination. I'm not going to click to page 86 of 100 expecting to suddenly find what I couldn't find on pages 1 to 5. If I'm going to keep clicking, the numbers are arbitrary.
I think it all depends on what type of content is available exactly.
Take, for instance, a simple e-commerce site that's selling stuff: if you're looking for something in particular, won't you go through almost all the pages to find it? This is from a usability point of view...
true, i guess it does depend a lot on the content
The large directory which I manage has to have all of its pages indexed. We offer a directory of very specific product suppliers and (in simplistic terms) we get paid by them for each referral from our site. Therefore I need to organise the huge amount of company listings in such a way that they are all able to be found by the search engines. Most of my visitors will come through directly from search, so they might land on page 86 out of 100 because that's the product supplier they are looking for.
It is a very specialised industry and so although there is little "unique content" from a traditional SEO stance, it is very valuable to my clients to be listed on the directory as the main site has a heavyweight industry presence which means (once I've finished working on it) the directory will have a good amount of link juice going through to it.
So I have two issues: handling people who browse through on the site (so I have to be careful with confusing sub-categories) and those who land directly from search (I need to make sure the pages are easily accessible, which might mean sub-categories). It's a bit of a Catch-22 for me.
Seojimbo mentioned the problem of showing all the pagination links. I've read somewhere about this solution:
<< 1, 2, 3, 4, 5 ... 10 ... 50 ... 100 >>
So, in this example, you make it possible for the 101st pagination page to be reached at the same depth as the 6th page.
You're talking about Maciej's great post: Testing How Crawl Priority Works, and you're right... it can be really useful to boost indexation if used correctly.
I've tested it myself, and was able to increase the number of pages being indexed in G.
You don't want the paginated pages themselves showing up in search results (hence the non-optimal meta data). But you do want link juice to flow through these pages to the actual content pages that are listed on them.
Suppose you have 11,800 products or articles. In one big browsable list with 100 per page, you'd have 117 pages of results. Page 57 out of 117 pages of results is not very valuable as a landing page, but it may be the only page on your site that contains links to your 5701st through 5800th best product pages or articles.
This makes sense; the article doesn't. What's the point of even having "SEO Tips and Tricks Page 130" in the index at all? A smarter way of doing this would be dynamic page titles based on, say, 100 variations of keyword research, so page 100 would be long-tailed to death and page 1 would be kinda competitive.
Nicely put, Rand. Displaying the most content with the fewest pages is crucial for both engines and users :)
Wow... Nice points rand. I'll try to be more careful with paginations
This couldn't have come at a better time for me as I am in the process of trying to sort out a huge directory on one of my companies sites that is in, what can only be described as, a freakin' mess.
I think you have to be very careful with sub-categories though. I question whether they will increase usability for the human user because if you get too granular, people won't actually know what they are looking for. In your example, there may be a public hall which fills many different roles and so people won't know what category to look in for it. They may get annoyed and leave, missing the best entry for their purposes. Damn these humans!
However, there is a great deal of useful advice I shall be taking from this article, thanks.
Google heard your voice SEOMozers.
Google's solution bends towards "View All" option.
https://googlewebmastercentral.blogspot.com/2011/09/view-all-in-search-results.html
Great article and awesome information.
Also some great ideas, something I didn't think of.
Impeccable timing Rand Man. I'm just now starting to prep for an ecommerce site.
This post was so meaty, I've created a new "pagination" category in my records. Many thanks for another quality post.
Hi Rand,
You said that we should not "Add nofollow to the paginated links on the results pages. This tells the engines not to flow link juice/votes/authority down into the results pages that desperately need those votes to help them get indexed and pass value to the deeper pages."
The problem is that Webmaster Tools is counting these pages as duplicate content (ie. duplicate meta descriptions and title tags) since I have sorting links in the search results.
eg.
resultspage.asp - original page that should be indexed, and counted.
resultspage.asp?SortBy=L&SortDir=DESC&SearchText=
resultspage.asp?SortBy=P&SortDir=ASC&SearchText=
resultspage.asp?SortBy=R&SortDir=ASC&SearchText=
resultspage.asp?SortBy=C&SortDir=ASC&SearchText=
How can I fix this problem then? Please help.
I got a habit of searching for things like "pagination seomoz" and "SEOQuery seomoz" in Google, and luckily, I always land in a perfect page...
Pagination theoretically seems to be a good way of improving results' indexation. However, as my research has shown, second or further listing pages are not visited by Googlebot very often.
Even if the first listing page is visited 15 times a day, the second may be entered once every 3-4 days, which practically gives us no value.
This is mainly because further listing pages do not have any external links and very few internal ones. In order to increase indexation, it is much better to either increase the number of elements on a page or provide additional categories.
What do you guys think about something like:
<link rel="prev" href="/page-2/"><link rel="next" href="/page-4/">
I see there hasn't been a post on this article in a while, but the information on the drop down menu was valuable to me because I have an eCommerce website. I didn't stop to think that could cause problems with duplicate content.
On product pages should you include the navigation and footer links in the 100 link per page limit?
Rand, Excellent article! right on target!
What if you have, say, 10,000 jewelry products and faceted navigation?
A user selects Type: Rings + Metal: Silver; as a result they get 600 products, say 50 per page.
They can refine further by Size or Stone - these are all dynamically created pages with pagination. Creating DIFFERENT meta data for the first few pages doesn't make much sense as the products are still the same, and coding VIEW-ALL for Google with a page of, say, 300 products doesn't seem like a solution.
Do the bots count the header and footer links in the 100 links per page limit? Is it not 100 unique links per page?
Brilliant post, once again!
I have a question though; I am guilty of using canonical tags on my paginated pages pointing to page 1 of the results. :-(
I have dup content issues. We have 800,000+ pieces of content we are trying to display on 1 site and we only show the latest 12 weeks' worth... So my content is arranged from newest to oldest, and obviously as the content gets older, the same content is displayed on all pages.
So if I can’t use canonical, what would you recommend? I am planning to create sub-sub-categories and nofollow as recommended here: https://www.seomoz.org/blog/whiteboard-friday-a-farewell-to-pagination
Can you tell me how to use htaccess for the URL?
I am getting a query string in the URL in pagination and I don't want to show the actual URL in pagination.
https://localhost:1111/pagination/example-form.php (after search)
https://localhost:1111/pagination/example-form.php?page=3&ipp=25&tb1=Asia (when I click on pagination page 2)
I want
https://localhost:1111/pagination/example-form.php after the pagination click
What is recommended for many results, say 800+ that can be filtered?
Say you begin with SEO Companies
Then you filter by a City
Then you filter by those that specialise in Keyword Research
The first set of results (these days) is recommended to be chained together with rel=prev/next, which creates one long page for Google.
Does that then mean that the filtered categories of content are duplicates of the non filtered top level?
" What is Pagination ?" question by Interviewer in my last Interview. I don't hv any Idea about Pagination :-(. I lost my Interview :-(. Finally got my concept clear by reading this, Thanks @Rand. If I were read the post before .. I was a part of a brand ECommerce portal.
Great piece of content you have shared in your blog. Thanks to you.
Are there any suggestion about the URL structure? Should one go with something like https://moz.com/result?page=2 or https://moz.com/result-page-2 or even something else?
Under the Do section could you expand on these bullets further?
Possible illustrations would help.
Thanks,
BRLM
Great information. Great point about making paginated page title tags less than optimal but not nofollowing them! I'm still not sure about the suggestion of not worrying about descriptions for paginated pages. Does anyone else question that?
Very comprehensive and useful. Thx much. I know this is 2+ years old but from our research it seems to still hold true. Applying this to our blog post pagination.
The note about abstaining from rel=canonical on paginated content is more than debatable. This will only work if you somehow make the titles and metas on the extra pages at least 90% different from the boilerplate on the main category page, which is quite a pickle for most major shopping carts. As for the rest, you will end up with dispersed link juice, especially the juice built from incoming links from other domains. Yes, you'll have 5 pages instead of 1 in the Google index, but that 1 main page will never rank as solidly for your term as you'd want it to. It will keep shuffling back and forth like a deflated ball (position 7, then out of the top 500, then to #17, then back to #7) until you finally get an OOP while trying to stabilize it through increased legitimate, non-keywordy link building. The OOP is then loosened as you place the canonical back on. Which do you prefer: an index boost with the eventual loss in traffic because none of the new category pages ranks as well as the main one used to, tinkering with Magento for hours trying to make the paginated meta content as unique as possible, or simply saving your time with the classic suggested use of the canonical tag?
I am still learning here. Won't creating a sitemap do this job? Now I am confused.
As usual, great article Rand.
It's applicable to SEOs and developers alike. That said, the developers are most likely going to leave it to the SEO to realise their mistake, and then it'll come back to them (the developers)!
I'm totally for sub-categorizing when it seems as though links are becoming too many on a page, so I was actually quite surprised when Jen & Danny (Whiteboard Friday) said lots of people would rather put nofollow tags on to help with the link juice as opposed to further categorising their content. I guess this just stems from me not being much of a fan of internal nofollow links. If there's a nofollow link, then maybe it shouldn't be on that page in the first place?
For me, I'll rather sub-categorize my links first, then consider other methods of pagination if there are still lots of links on the page.
Thanks for the tips...
I agree with that, sub-categories also means more pages and content to target very specific searches - not really something suitable for paginated pages
Great article that covers SEO issues with pagination, thanks, Rand.
The diagram looks really similar to wordpress structure :D
For nofollow, rel=canonicals and redirects, I don't spend any time on this as I am not worried that pagination (I use WordPress) will cause any SEO trouble.
Excellent article again!
One point of note I've seen is that a lot of development tools now handle pagination for you, which saves the developers a lot of time. However, the coding on the pagination is normally very bad and as a result requires a lot of reworking to get it looking and functioning half respectably. I'm looking at you, Visual Studio....
Do you have any idea on %s of sites that this might affect or what the possible impact could be on indexation/rankings?
Definitely agree with the quality of the article.
In the case of an article split across many different pages, I reckon that the best solution is to not give access to the secondary pages from the internal search engine at all, thus avoiding further possible problems and making sure users always get access from the main page (unless they have the address bookmarked somewhere).
This obviously helps reduce the number of pages listed for small websites; however, a blog will sooner or later face the problem of pagination, and I don't foresee many problems once good metas are used.
In a post like this, surely we may not forget about the usability of our pagination! As it turns out, the 'ultimate pagination' quite well matches the guidelines you described Rand!
https://v1.wolfslittlestore.be/in-search-of-the-ultimate-pagination
This definitely goes into my bookmarks. Unfortunately I was one of those to use nofollow for pagination, mainly believing each page was duplicate content. I'll revise that policy now.
You've touched a hot topic for me so, a big thank you.
Take a look at my blog post 'The Impact of Pagination on SEO' over at https://www.epiphanysolutions.co.uk/blog/the-impact-of-pagination-on-seo/
It outlines some of the best ways of dealing with pagination whilst managing SEO.
Great post today, Rand. I'm not sure my CMS will allow me to fiddle with things to the extent that you suggest, but I should be able to find the best option, given the multiple suggestions you provided. Thanks!
It's easy to get stung with the conditional redirects, especially if it looks like cloaking.
Basic, time-consuming and manual work, albeit what is necessary to win. Reminds me of the saying: you can have it good, fast, or cheap; pick two.
what happens if a blog article like this gets thousands of comments and there are links with each one?
does google actually drop your rankings based on that?
That's a good point and something that makes me question the "100 links per page" idea. The comments on this site aren't paginated (thank God), nor are the articles themselves. Assuming that there are at least 5-10 links in the article itself, 13 links in the main navigation, 2 in the meta info, ~30 in the sidebar and probably an average of 150 comments per article, each with a link to the user's profile and excluding any links (I'd say about 5 per 50), then that's well over 200 links per article on this site... I seriously doubt Google would penalise for that.
Originally that may have been a harder limit due to bandwidth and other technical issues.
Now, I'd say the 100 links or less is more of a best-practices target. Along with that is the idea of usability as a guide. In that case, even a blog post with lots of comments and links may still be useful; but a sitemap-like page with 1,000s of links and no sense of organization isn't very useful, and its value may be downplayed by the engines... whether that downplaying can be done via algo vs. by hand is another matter.
Ah I see. For me, a best practice would be to limit to 100 links in the main page excluding any comments, which I guess is what you're saying there. I'm sure Google's algorithms can figure out what is a comment and what is a post anyway.
Thanks for the info ;-)
There are a few good pagination plugins; I haven't tried the one you linked to, but will look into it.
I think there is also some confusion that needs to be addressed regarding Wordpress categories and SEO. I have heard both sides of the argument on whether it is or isn't a good idea to have categories and/or tag pages indexed.
For having categories/tag pages indexed: Promotes deeper indexing
Against: Makes sure that the specific posts get link juice and rankings instead of categories / tag pages.
Thoughts?
Great comprehensive post, I really like the title & meta description idea! This is definitely something that I’ve missed before as a developer.
Thanks for the article, Rand. One thing that has always concerned me is the best method of displaying pagination on our own pages, but you've made it seem really easy.
A few individuals here have spoken about splitting up their articles into several different pages should they need to. For me, it's a terrible thing to do and I don't think it's just because I have a huge screen (21.5" at home, 27" at work). My reasoning behind this is that when I view a page on a site, I want the whole page to be displayed for me - the internet is not a book. Scrolling down to the bottom of a page and seeing a "page 1 of 1,000,000" is one of the most off-putting things for me. If possible, always include all of your text on the individual page unless it exceeds a ridiculous amount (in which case it's probably got a lot of conjecture in or would be better as a book anyway).
Mystery solved. So it's YOU in the picture trax! traxor at work
Haha, that actually made me laugh.
Does Google index the content of canonicalized, paginated posts, or does it only index the main URL?
For example, does the content at www.example.com/article-seo/1... /2 get indexed or not when the paginated URLs are set to canonical?
How does Google see it? If it does not index the content of paginated posts, then it will affect the ranking of the article, as it will only index the content on the main URL, which is www.example.com/article-seo/
Hi Mohsin! Again, this question is far better suited to our Q&A forum. :)
Sorry, but this article doesn't jive when we are talking vBulletin forums and pagination.
Great Post, now I have more knowledge about that Pagination.. So I really need it for SEO..
Seriously? These attempts at sneaking links into comments are getting worse by the minute.
You should see what the spammers do with the older articles. Jen is kept busy having to deal with it constantly.
When you see these, help make her job easier and send her an email [email protected].
Links? what links? ;)
Not sure, but I'm pretty sure there were some sneaky links there somewhere.