In early June of this year, SEOmoz released some ranking correlation data about Google's web results and how they mapped against specific metrics. This exciting work gave us valuable insight into Google's ranking system, confirming many assumptions and opening up new lines of questioning. When Google announced their new Places Results at the end of October, we couldn't help but want to learn more.
In November, we gathered data for 220 search queries - 20 US cities and 11 business "types" (different kinds of queries). This dataset is smaller than our web results dataset and was intended as an initial data gathering project before we dove deeper, but our findings proved surprisingly significant (from a statistical standpoint), and thus we're making the results and report publicly available.
As with our previous collection and analysis of this type of data, it's important to keep a few things in mind:
- Correlation ≠ Causation - the findings here are merely indicative of what high ranking results are doing that lower ranking results aren't (or, at least, are doing less of). It's not necessarily the case that any of these factors are the cause of the higher rankings; they could merely be a side effect of pages that perform better. Nevertheless, it's always interesting to know what higher ranking sites/pages are doing that their lower ranking peers aren't.
- Statistical Significance - the report specifically highlights results that are more than two standard errors away from zero (a 98%+ chance of a non-zero correlation). Many of the factors we measured fall into this category, which is why we're sharing despite the smaller dataset. In terms of the correlation numbers, remember that 0.00 is no correlation and 1.0 is perfect correlation. In our opinion, in algorithms like Google's, where hundreds of factors are supposedly at play together, data in the 0.05-0.1 range is interesting and data in the 0.1-0.3 range is potentially worth more significant attention.
- Ranked Correlations - the correlations compare pages that ranked higher vs. those that ranked lower, and the figures in the report and below are average correlations across the entire dataset (except where specified), with standard error as the metric for accuracy (a minimal sketch of this computation follows this list).
- Common Sense is Essential - you'll see some datapoints, just like in our web results set, suggesting that sites which don't follow commonly held "best practices" (like using the name of the queried city in your URL) earn better rankings. We strongly urge readers to use this data as a guideline, not a rule (for example, it could be that many results using the city name in the URL are national chains with multiple "city" pages, and thus aren't as "local" in Google's eyes as their peers).
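To make the "average correlation with standard error" idea concrete, here's a minimal sketch (in Python, with made-up numbers; this is not the actual analysis pipeline) of computing a per-query Spearman correlation and averaging it across queries:

```python
# Minimal sketch: rank-correlate a metric against position within each query's
# results, then average those per-query correlations and report a standard error.
from math import sqrt
from scipy.stats import spearmanr

def mean_correlation(serps):
    """serps: list of (metric_values, positions) pairs, one per search query.

    Returns (mean Spearman correlation, standard error of the mean).
    Note: with position 1 coded as the top result, a helpful metric shows a
    negative correlation here; flip the sign to match the report's convention.
    """
    corrs = []
    for metric_values, positions in serps:
        rho, _ = spearmanr(metric_values, positions)
        corrs.append(rho)
    n = len(corrs)
    mean = sum(corrs) / n
    variance = sum((c - mean) ** 2 for c in corrs) / (n - 1)
    return mean, sqrt(variance / n)

# Two hypothetical queries with 5 results each (metric value vs. position).
serps = [
    ([12, 40, 7, 3, 1], [1, 2, 3, 4, 5]),
    ([55, 20, 18, 9, 2], [1, 2, 3, 4, 5]),
]
print(mean_correlation(serps))
```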
With those out of the way, let's dive into the dataset, which you can download a full version of here:
- The 20 cities included:
- Indianapolis
- Austin
- Seattle
- Portland
- Baltimore
- Boston
- Memphis
- Denver
- Nashville
- Milwaukee
- Las Vegas
- Louisville
- Albuquerque
- Tucson
- Atlanta
- Fresno
- Sacramento
- Omaha
- Miami
- Cleveland
- The 11 Business Types / Queries included:
- Restaurants
- Car Wash
- Attorneys
- Yoga Studio
- Book Stores
- Parks
- Ice Cream
- Gyms
- Dry Cleaners
- Hospitals
Interestingly, the results we gathered seem to indicate that the Google Places ranking algorithm doesn't differ much across cities, but when business/query types are considered, there are indications that Google may indeed be changing how the rankings are calculated (an alternative explanation is that different business segments simply weight the factors very differently depending on their type).
For this round of correlation analysis, we contracted Dr. Matthew Peters (who holds a PhD in Applied Math from Univ. of WA) to create a report of his findings based on the data. In discussing the role that cities/query types played, he noted:
City is not a significant source of variation for any of the variables, suggesting that Google’s algorithm is the same for all cities. However, for 9 of the 24 variables we can reject the null hypothesis that business type is not a significant source of variation in the correlation coefficients at α = 0.05. This is highly unlikely to have occurred by chance. Unfortunately there is a caveat to this result. The results from ANOVA assume the residuals to be normally distributed, but in most cases the residuals are not normal as tested with a Shapiro-Wilk test.
You can download his full report here.
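For readers curious about the shape of that analysis, here is a minimal sketch (with invented correlation values rather than the report's data) of a one-way ANOVA across business types, followed by the Shapiro-Wilk normality check Matt refers to:

```python
# Rough sketch: test whether business type explains variation in per-query
# correlation coefficients, then check the ANOVA's normality assumption.
from scipy.stats import f_oneway, shapiro

# Hypothetical per-query correlations for one factor, grouped by business type.
groups = {
    "restaurants": [0.12, 0.08, 0.15, 0.10],
    "attorneys":   [0.25, 0.30, 0.22, 0.27],
    "gyms":        [0.05, 0.02, 0.09, 0.07],
}

# One-way ANOVA: is business type a significant source of variation?
f_stat, p_value = f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> reject the null

# ANOVA assumes normally distributed residuals; check with a Shapiro-Wilk test.
residuals = [x - sum(g) / len(g) for g in groups.values() for x in g]
w_stat, p_norm = shapiro(residuals)
print(f"Shapiro-Wilk: W = {w_stat:.2f}, p = {p_norm:.4f}")  # small p -> not normal
```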
Next, let's look at some of the more interesting statistical findings Matt discovered. These are split into 4 unique sections, and we're looking only at the correlations with Places results (though the data and report also include web results).
Correlation with Page-Specific Link Popularity Factors
With the exception of PageRank, all data comes via SEOmoz's Linkscape data API.
NOTE: In this data, mozRank and PageRank are not significantly different than zero.
Domain-Wide Link Popularity Factors
All data comes via SEOmoz's Linkscape data API.
NOTE: In this data, all of the metrics are significant.
Keyword Usage Factors
All data comes directly from the results page URL or the Places page/listing. Business keyword refers to the type, such as "ice cream" or "hospital" while city keyword refers to the location, such as "Austin" or "Portland." The relatively large, negative correlation with the city keyword in URLs is an outlier (as no other element we measured for local listings had a significant negative correlation). My personal guess is nationwide sites trying to rank individually on city-targeted pages don't perform as well as local-only results in general and this could cause that biasing, but we don't have evidence to prove that theory and other explanations are certainly possible.
NOTE: In this data, correlations for business keyword in the URL and city keyword in the title element were not significantly different than zero.
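As a purely illustrative sketch (the helper below is hypothetical and not taken from the study), here is how such binary keyword-usage features might be extracted for a single result before being rank-correlated against position:

```python
# Hypothetical feature extraction for the keyword-usage factors described above.
def keyword_factors(url, title, listing_name, city, business):
    """Return 0/1 features for city/business keyword usage in key elements."""
    url, title, listing_name = url.lower(), title.lower(), listing_name.lower()
    city, business = city.lower(), business.lower()
    return {
        "city_in_url": int(city in url),
        "business_in_url": int(business in url),
        "city_in_title": int(city in title),
        "business_in_title": int(business in title),
        "city_in_listing_name": int(city in listing_name),
        "business_in_listing_name": int(business in listing_name),
    }

# Made-up example result for the query "Austin Ice Cream".
print(keyword_factors(
    url="https://www.example-austin-icecream.com/",
    title="Ice Cream Shop in Austin",
    listing_name="Example Ice Cream",
    city="Austin",
    business="Ice Cream",
))
```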
Places Listings, Ratings + Reviews Factors
All data comes directly from Google Places' page about the result.
NOTE: In this data, all of the metrics are significant.
Interesting Takeaways and Notes from this Research:
- In Places results, domain-wide link popularity factors seem more important than page-specific ones. We've heard that links aren't as important in local/places and the data certainly suggest that's accurate (see the full report to compare correlations), but they may not be completely useless, particularly on the domain level.
- Using the city and business type keyword in the page title and the listing name (when claiming/editing your business's name in the results) may give a positive boost. Results using these keywords seem to frequently outrank their peers.
- More is almost always better when it comes to everything associated with your Places listing - more related maps, more reviews, more "about this place" results, etc. However, this metric doesn't appear as powerful as we'd initially thought. It could be that the missing "consistency" metric is a big part of why the correlations here weren't higher.
- Several things we didn't measure in this report are particularly interesting and it's sad we missed them. These include:
- Proximity to centroid (just tough to gather for every result at scale)
- Consistency of listings (supposedly a central piece of the Local rankings puzzle) in address, phone number, business name, type
- Presence of specific listing sources (like those shown on GetListed.org for example)
- This data isn't far out of whack with the perception/opinions of Local SEOs, which we take to be a good sign, both for the data, and the SEOs surveyed :-)
Our hope is to do this experiment again with more data and possibly more metrics in the future. Your suggestions are, of course, very welcome.
As always, we invite you to download the report and raw data and give us any feedback or feel free to do your own analyses and come to your own conclusions. It could even be valuable to use this same process for results you (or your clients) care about and find the missing ingredients between you and the competition.
p.s. Special thanks to Paris Childress and Evgeni Yordanov for help in the data collection process.
Can't wait to dive into this after I get back from the Kelsey Group show. Perhaps for next time (since it sounds like you didn't measure it this round), you could look at whether 'number of possible relevant businesses' has a correlational effect on certain ranking factors. I.e., in smaller towns with fewer businesses of each type, do certain factors play a larger role, vs. competitive industries in larger towns, where there are more signals to rely on? Are certain signals weighted more heavily as Google is able to place more confidence in them?
In Mike Blumenthal's original collaborative quantitative study, it did seem like those signals varied along a continuum.
At any rate, a great service for the industry to have this data published. I will do my best to contribute full thoughts when I get back from the Kelsey Show over the weekend.
It might be helpful to break this kind of study down into much smaller segments, to learn more about different aspects of local search.
I think that this study may have bitten off more than it could chew, to start with.
Some suggestions for followup research on local ranking factors:
1. Google's local algorithms focus upon identifying businesses or other geoentities at specific locations, rather than ranking websites for those locations. Skipping out on "location prominence" was a missed opportunity, and a recent Google video stated that it was one of the main factors in local search ranking. See:
https://www.youtube.com/watch?v=L1ONMavPX2o&feature=player_embedded
2. An interesting smaller, but useful study would involve looking closer at how Google might determine that a particular site should be the authority associated with a particular business. For example, I've seen Google associate the cafeteria services website for one state capitol building as the authority website for that building, even though there were much better choices, like the state legislature, the tourism site for that building, or even a few others.
3. Google's "location sensitivity," which is clearly in use at Google Maps, means that different queries may involve looking at a larger or smaller scaled map, and may invoke different approaches to ranking businesses/entities at different locations. For example, a search for "pizza" in an urban area is going to involve a very different sized map than the same search in a rural area. A rural or suburban search for pizza may stress distance from a centroid more than the same search in an urban area, which may call location prominence into play much more.
4. About centroids, how does Google choose what might be a centroid? The patents hold some clues, but a study would be an excellent way to learn more. Is it a geographic center point based upon zip code, or some other way of segmenting geographic regions? Would it be influenced by political boundaries such as lines between states, cities, neighborhoods? Sometimes a centroid appears to be located at a prominent place, such as a capitol building. If distance is one of the key elements of local search, as noted in the video I linked to above, what is it a distance from?
5. Another area that you didn't delve into much in this research, but which I think is a key element is the consistency of geographically related information across citations, including the names of businesses. If a business has multiple trade (or dba) names, and is referred to differently across the Web, that may harm how well it ranks because Google may not associate the same names with the same business at a particular location.
Running statistical correlations between things like whether the URL for a business includes the business name, or keywords associated with a local search query may miss the point of local search. It might be a good idea to try to learn from some of these smaller inquiries to get a better sense of how local search actually works.
Good points! Regarding the centroid, I think you are making this much too complicated. I think it is just the result of a Google Geocoding API query for that particular locality, i.e. what comes up when you enter just the city name into Google Maps.
Thanks.
I'm not a fan of making things more complicated than they are, but there are plenty of times that we self impose limitations upon our explorations of topics by making simple assumptions that fail to account for a wide range of factors and situations. Since this post is about trying to understand local search through some solid tests and looking at correlations, it might not be a bad idea to put aside the simple answers, and look at things that might be hinted at both in patents and whitepapers from the search engines, and in observed behavior from places like Google.
I've seen a number of different definitions and interpretations of a center point or centroid within whitepapers and patents from the search engines. It could be a geographic center point amongst a set of returned results, or the center point of a map that's being displayed when one performs a search, or a prominent location (such as a capital building or some landmark - a search for "restaurants near the space needle," for example). It could be based upon latitude and longitude information, or in areas where that measure is illegal (such as China), upon a series of grids.
Is the centroid of Manhattan located in Central Park? Maybe it is, and maybe it isn't. When I search for [pizza manhattan] it might be. When I search for [car mechanics manhattan], it might not be. Why not?
I just wanted to say that SEOmoz's recent increase in local seo coverage has been noticed and greatly appreciated. Thank you.
This is great!
What I love about you guys is that you don't talk SEO bullshit, but you back up everything you say with data, or at least with a testable hypothesis. When I left Behavioural Science Research and started to do Online Consulting, I thought that people working with computers would always be of an empirical mind. What a mistake. Having access to the tools and data doesn't mean you understand the stats.
I am happy I had advanced stats at university, so I actually took a dive into the data. It's been a long time since I used SPSS, but it was great to do all the analysis for myself (I mean, it's pretty easy if you are pointed to the right tests to run).
So nerdy as I am, I was delighted to read in the report that there was actually a note about the underlying assumption of ANOVA (normal distribution of the data), which was not met in that particular dataset. Gee... never thought a remark like this could actually make me happy (-: - good stats is like art...
Thanks so much for doing what you are doing. It's not only the data & information (=the WHAT) you give us that is great for the SEO community - to me it's also the way (=the HOW) and the motivation behind it (=the WHY) that sets an example for everyone.
Guess I need to get a Pro account after all...
Fascinating post. I was wondering about the strength of keywords in the listing’s title, with David Mihm’s study hinting that it was a little bit ‘grey hat’ to do so:
“This is against the guidelines but still seems to have a positive effect for the time being. I don't see this method working through 2010-2011.”
Seems it's still very much 'working'.
Martin, Very good point. I would advise (as does Google) against using anything but your business name in the business name field on your Places Page.
I find that keywords in the listing help, but I would suggest not making it obvious. Many business names have the keywords in the title, for example:
Joe's Electrical Appliances
or
Joe's Washing machines and refrigerators.
Both could be legitimate business names.
Great stuff, but without measuring the consistency of information across citation sources you are missing a big variable. Yesterday at ILM, Carter Maslan said the 3 main factors for ranking are Relevance, Distance and Prominence. Prominence implies some combination of links & citations, and if your citations are not consistent, it may (and usually does) screw with your Prominence score.
Great data still. Thanks for sharing.
I love that you conduct this research and share the results! I would also like to see a follow up study factoring in those few items you didn't get to measure in this report. I can't wait to discuss this with my team tomorrow.
+1 want to see the report how "Proximity to centroid" and "Consistency of listings" can affect rankings in local results.
P.S. Rand, you have a misprint in "Statistical Signifigance" (the #2 item in the first list of the article). I noticed because you later wrote "statistical significance". But maybe I'm wrong; English isn't my native language.
P.S. If this was helpful, please thumb up! Thanks :)
Thank you for the insightful data - that's one more piece of the puzzle of local ranking factors.
Fascinating stuff, but where I'm really interested is where Places are appearing in very high-volume, national terms:
My personal guess is nationwide sites trying to rank individually on city-targeted pages don't perform as well as local-only results in general and this could cause that biasing, but we don't have evidence to prove that theory and other explanations are certainly possible.
This quote intrigues me because here in Ireland we have Places appearing for terms such as broadband. Now, personally, I see no reason why someone needs to know where the head office of a broadband supplier is, and it's unlikely the titles will have any local data, which opens up an entirely new set of questions.
Fascinating stuff, but where I'm really interested is where Places are appearing in very high-volume, national terms
Agree. We have local brick and mortar Places listings appearing in big generic queries such as "auto insurance" with the big national brand and URL they are affiliates of. In this screenshot local agent Frank's phone number appears as the place, but with the homepage for Geico:
https://img.skitch.com/20101209-tyguc8d4bajc5ic4eyjw1ht3nk.jpg
Looks like they drove that places listing from their national site with a custom page for Frank:
https://www.geico.com/local/ffortunato/
Then throw a few links at that local page for good measure:
https://www.superskippers.org/our_sponsors.html
Just seen this, great work.
Interesting research. Using the location and business keyword in title makes sense. Thanks for sharing.
Exact domain name and/or exact company name match is also a factor I think.
Rand, thank you for conducting this study and sharing your findings. I now envision thousands of commercial real estate SEOs having centroid proximity panic attacks. Instead of "location, location, location", the new real estate mantra becomes "centroid proximity, centroid proximity, centroid proximity!"
One more thing - the whole keyword-in-title deal may be thrown off by fake address spam, as those businesses using this technique successfully are also likely keyword-spamming their business name/title.
Hi Rand,
One problem is that the new Google Places guidelines do not allow using keywords such as *city* or *business type* in the business title. Nevertheless, it seems that those harsh new rules mostly get applied only to new listings in Google Places. As a conclusion, I would say that we should have a closer look at this specific ranking factor and whether the weight actually given to it will diminish in the near future... otherwise local SEOs are divided into a 2-class society: those handling older listings, and those handling newer ones...
Thx for that very interesting analysis btw!
Love the data Rand, stellar as always. I'm glad you guys reached out to Dr. Peters as well. Was playing around with real estate results a while back and was seeing the same type of stuff, but definitely didn't look on this scale.
Hope you guys keep raising the bar with research and data.
There's some brilliant info here. I was interested to see that there may be a negative effect for URLs with city references. Do you think the level of negativity may change depending on where the reference occurs? I have a client who uses the city name in their domain in a completely non-spammy way, so I would hope this wouldn't dent them too much, or hopefully less than someone who has a subfolder of the URL referencing the city. There certainly seems to be a lot of data and time spent on local lately. Really good to see.
Great post and discussion. The majority of my customers with a mature local/places listing have seen the benefits in their SERPs and traffic and I can see that Google's Hotspot has made a big difference more recently. I am in the process of writing a blog on Google's Hotspot which I'll post early next week with a link within these comments.
Will the hotspots change with the changes to the local layout?
Does anyone have any numbers -- or even strong suspicions -- about the importance of consistent addresses on rankings? Say you're running a tour company that starts tours from many different places in a city but none of them start from the office address that's listed on your entry with Google. Will listing all those tours on Zvents, with all their different addresses, either confuse Google into ranking you lower or make Google think you're actively shady about your address, even though you're not being shady at all? (Yes, you're supposed to put a service area rather than an address on your listing if you don't do business from your office. Trouble is, actually doing that gets you demoted to oblivion. All tour companies list their offices even though none of them run tours from there.)
Sorry to repeat. I thought my click hadn't registered.
It may lower Google's confidence in the location of the business for purposes of ranking that business in a specific location.
I can point you at some of Google's patent filings that may be relevant:
We worked with a fishing guide that had the same problem. It was really funny to see 8 fishermen using the dock they launch from instead of their business address. I think Google is less likely to obliterate industries where it would be difficult to cut people completely off with a new ranking algo. Look at porn, pills, and casinos. I don't have particular experience in them, but it is the industry norm there to use link tactics considered terrible everywhere else on the web. Since all of them do it, Google probably doesn't really see a reason for penalizing. (Someone who has worked in those niches may discredit my theory, but it is more of a hunch than a statement of fact.)
Excellent research. Thanks for taking the time to gather all the data, interpret it and share it with all of us.
Great post. I have found that the correlation between Listing Title and page title tag to be strong. I was surprised to see that having the city in the domain was hurting performance. Great insights.
We have about 20 clients that we have worked with over the last 6 months for local rankings, and the factor I am definitely not surprised about is the related maps. I was sad to see it in your report, as that has kind of been our secret sauce. We left it out of a few of our cheaper clients' campaigns to do a comparative test (of course not as well documented as SEOmoz's), and we were surprised to see a strong trend in the difference in their rankings.
Tip: It is so easy to create a map for just about any business. If it's weight loss, map great running trails in the area and plop it on your client's site. If it is a pharmaceutical lab, create a list of all of them in the country. Use your imagination, but don't forget the maps.
A great study I would like to see on this topic is the traffic difference in the new local mixed listings between keyword stuffing for local relevancy vs. a slightly lower ranking with a great call-to-action title.
I apologize if I completely missed the boat on this reference, but what does "# of related maps" refer to on the Google Places page?
Any help would be appreciated :)
The number of related maps in Google Maps and other mapping databases, for instance GPS nav systems in cars.
It refers to the number of people who may save a map of their favorite places in Google Maps.
These factors seem to be changing regularly as Google steps up its places algorithm to ensure spammers can't exploit them. However, elements such as location, aspects of keywords and an equivalent of link building/page rank related to citations and inbound links will always be relevant to some extent.
We've noted that regular updates to the listing seem to promote re-indexing in much the same way as search submissions or crawl rate.
Hope that helps
e-mphasis Internet Marketing
https://www.e-mphasis.com/
My Google Maps information has been successfully reviewed by the administrator, but on the keyword search results page I had to go all the way to the last page (page 12) to find our information. I would appreciate expert help, thank you!
E-mail:ChinaSeoBoy#gmail.com
This type of research is important because it will make it easier for business owners to advertise the products and services that they provide. And that's what every business owner wants: an easy way to make their business known. For example, e-marketing for injury lawyers like:
https://new-mexico.williammcbride.com/albuquerque/
Great Data. Thank you.
This data makes sense from my personal experience. The geo in url is an odd variable.
As a business owner who does his own SEO I think the challenge within the google places concept is freedom. If I am running an auto shop in Brooklyn called "Brooklyn Auto" and my url is brooklynauto.url
It's a frustrating concept to a business owner.
In 1998 B.G. (Before Google) [and I am claiming the original trademark right here on my B.G. acronym (c)] I had to advertise in the phone book, and I still do, as this Places technology is too immature to abandon the tried and true Yellow Pages entirely. It does still work. Full page.
It would be interesting if Google used the same rules as the print YP. For example:
Seniority and ad size determine "rank". Translated to SEO speak, that could be domain age and SERP ranking factors for the Google Places algo.
Although I guess that is what PPC does?
I by no means am bashing Google Maps or whatever. I tell every business owner I know that they are missing the boat if they are not gearing up for a revolution in how people find EVERYTHING, and they had better get their butts into Google ASAP. But I still have a full page in the dominant print YP in my market.
Regards,
Panda Skinner
Thanks for the information, it's really very helpful. Great work.
Just finding this post now (Nov 2012) as I've not worked with Google local search until today. Not sure if posts this old are still monitored but if so, an updated version of this study may be pretty handy.
Be advised, while putting the city keyword in the title may give you a short-term boost, it is a violation of the Google Places Policies and it will hurt your listing rank in the long run. According to them:
Business Name: Represent your business exactly as it appears in the offline world.
So, unless the "real world" business name contains the city keyword, don't put it in the Google Places listing name.
Interesting data. It will probably take me a few re-reads to get a solid understanding of all of this. I do have an observation to note from my experience. I noticed that the report concludes that city is not a factor in local results, meaning that whether your business is in Denver or Miami, the same local search criteria exists for local search. I've found some results that seem contrary to that. I'm helping a client rank for the term "blepharoplasty Provo". Right now no local search results exist for that search. However, if you search for "blepharoplasty New York", Google shows local listings. The citations don't seem to be any more authoritative for the New York results on the whole. In this case it seems like Google needs to be "taught" through some means that "blepharoplasty" is a local-targeted search in Provo just as it is in New York.
If anyone has any feedback on this scenario, I'd be glad to read it.
Rand, thank you for conducting this study and sharing your findings. I just love the way you use a data-centric approach and then share not only the findings, but also the data, with the industry. Any chance you plan on adding sentiment analysis to the reviews data?
Fantastic analysis, Rand & co.
For any non-pro members, might be worth pointing out that Rand & Danny went over most of this and much more in their webinar 2 weeks back. Yet another reason to go Pro.
I +1 David Mihm’s suggestion to see if ranking factors vary depending upon the amount of competition. It would also be great to test:
- Links to the Places page
- Some of Bill Slawski’s suggestions, such as determining the most important characteristics of citations.
In addition, Rand, I know you & Ben briefly went over the disadvantages of trying to isolate & test one ranking factor at a time in order to quantify its importance, and I agree that it might be difficult & sometimes inconclusive, but I still believe there is value in this approach. For example, it should be possible to find two Places listings that have not changed much in 6 months and then sign one up with UBL and closely monitor any divergence.
Something I would love to figure out (and that I am struggling to get my overseas workers to implement) is the degree of difference between searches set to a local area using Google’s “Search from different location” option on the left of their SERPs, versus a local person just typing in their keyword. In other words, to see if a different IP makes a difference, even after we’ve set the search to that local city.
On this topic, it sure would be nice if there were a reliable program/tool that would rank-check the results from the client's actual location (assuming it's in a different city than the SEO's office). We are doing it manually now, which is very laborious.
Any Seattle-based SEOs who’ve read down this far might consider making the Seattle SEOs meetup. It’s tonight, but we’ll have another in a month.
https://biznik.com/events/seo-search-engine-meetup--2
Carl
Great Read....and Great Statistical Info! But I still can't shake the image of someone giggling hysterically in a darkened cubicle somewhere in the GooglePlex as they observe all of us trying to make some sense out of all of the inconsistencies in the Local SERPs that have been thrust upon us since late November.
The more I delve into all of this the more confused I get. The more tests I try to run to make any sense out of what are and what aren't significant ranking factors to try to incorporate for "new" clients, the more inconsistent the results are.
Small towns, large cities, near to centroid, far from centroid, reviews, no reviews, Google only reviews, 3rd party reviews, number of citations, no citations, with websites without websites, high pr, low pr, aged site, new site, no site, claimed lbl, not claimed lbl, search terms in url, no search terms in url, etc etc etc.
And then there are the "layouts": 2-packs, 3-packs, 7-packs, and no packs, depending on what and where you are searching. The displayed organic/local SERPs don't present the same results as Places, which then sometimes differ from Maps results. Telling Google you have "changed" locations offers yet another set of results in all three formats. Oh yeah, and there are the smartphone mobile maps search results, which change with your lat/long location. Oh, and don't bother checking in at Bing; you will find a whole new set of results, not important enough to get a mention at Google.
It all just deserves a good "Sigh".................Sighhhhhhhhhh ........ There, I feel better now.... :).
Now back to more analysis, I'm sure it will all start making some kind of sense soon.......
I'm surprised that some of these listings actually get away with those titles by "adding extraneous keywords or a description."
The algorithm is not the same for all cities: my client has stores in Miami, NY and LA, and I have done the same things to rank on Google Places in these 3 cities, but the results are not the same. I got excellent results in Miami and NY, but in LA the results are not good.
Excellent experiment, I'm looking forward to seeing the positive impact for my clients.
It makes sense why you were unable to test city centroid but I'd be really interested to see the results. I work for large brand that has hundreds of offices throughout the country and needless to say, we saw a lot of changes with the new places/organic mash up.
After the dust has settled, I can say with confidence that the biggest places ranking factor for us by far seems to be proximity to city centroid.
Hi Rand,
thanks for the work, especially for the raw data!
However I'm wondering what was so difficult about the centroid correlation. Actually, this seems like a fairly easy to collect datapoint to me:
You can get the GPS location of any place by matching on "&ll=xx.yyyyy,xx.yyyy" in the html source of each place page. The centroid of each of the 20 cities can be collected manually. Last piece of the puzzle is to calculate the bee-line distance between the centroid and the place. There are implementations for virtually every programming language, just search for something like "distance gps [programming language]".
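Something like this rough sketch would do the trick (assuming the city centroid coordinates are collected by hand, as mentioned):

```python
# Rough sketch: great-circle ("bee-line") distance between a place and its
# city centroid, using the haversine formula.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius ~6371 km

# Example: a place page's "&ll=" coordinates vs. a hand-collected Seattle centroid.
seattle_centroid = (47.6062, -122.3321)
place = (47.6205, -122.3493)
print(round(haversine_km(*seattle_centroid, *place), 2), "km from centroid")
```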
I would have gathered that data myself but I got a bit discouraged when I found out that the URLs of the actual place pages are not included in the xls! Maybe you can provide them!? You can essentially cut everything from the URL except the unique "cid" parameter for each place page.
Greets
Fabian
Negative correlation for the city keyword in the URL eh? Interesting...
This is a very interesting set of data, but the thing that sticks out to me is that it might not be an apples to apples comparison. Unless I'm misreading things, this research was to assist with the Google Places ranking algorithm and to explain the correlations between the many different factors and features pertaining to the website itself, the links pointing to the website and the Google Places profile, but the results analyzed were not all Google Place results. For example, in the query for Albuquerque Attorneys (https://www.google.com/search?q=Albuquerque+Attorneys&gl=us&pws=0), the first 3 and last 5 results are organic results and not Google Places results.
Specifically, were the organic results in the data collected included in the results for the "Places Listings, Ratings + Review Factors" portion, for example? At the same time, Albuquerque Attorneys returns a SERP with the new blended results, while a search for Austin Restaurants (https://www.google.com/search?q=Austin+Restaurants&gl=us&pws=0) returns a SERP with more of a traditional 7-pack with 10 organic results after that. The ranking algorithm for the 7-pack has stayed fairly consistent since the blended results have come into play, but the blended side has definitely shaken up a bit.
While the data is still very valuable, could some light be shed on these types of variables?
If you download and read the full report, you can see that we looked at both Places + Web results together, as well as independently (which is what the bar charts above represent).
Great, I will have to go through the full report more thoroughly then! What about the notion of results differing based on the type of results page - i.e. 7-packs vs. blended results? A comparison of the two types could be beneficial as well since the algorithm may differ per industry.
After 3 sponsored results, I get the 7-pack and then 10 organic results for https://www.google.com/search?q=Albuquerque+Attorneys&gl=us&pws=0
In the first two charts, what does "Linking RDs to domain" mean? What does "RDs" stand for?
Thanks for this very very great article! I just love the way you use the mathematical/scientific approach! SEO without analytics is just a hobby!
Did anyone reply to this? What ARE "linking RDs"?
probably 'root domains'
Really interesting article, I love to see the results of SEOMoz tests.
I'm also really interested to see the future tests and see how the missing identified elements can play on the rankings.
Thanks for making my job easier SEOMoz!
This correlates with everything I've seen from my clients.
Really useful article, this. Cheers!
Great work, Rand. This research gave us more detail on the workings of Google Places ranking, which I think is going to be the next big thing in a short period of time.
Also, I have found that domain age and on-page SEO factors have considerable effects on Google Places ranking. Now that local and normal SEO results are merged, SEO for the website has become an important part of ranking in Google Places as well.
Great report Rand!
I wonder how this would differ when looking at a totally different market? I suspect that some of these metrics will have a higher correlation when looking at a smaller market like Sweden for instance.
One thing we have also noticed in Sweden, but I guess it is the same all over, is that if you have a Places listing (one of those bigger ones, not the compact one) higher up in the SERP than your web result listing but still on the same page, the latter will not show in the SERP.
We had one example with a site ranking at number one in the web results for "keyword city", and after optimizing the Places listing and gaining the number 2 spot among the places listings, the web result listing disappeared.
Has any one seen anything like that before?
Yes, I have: when listed in the merged results, the organic listing drops back a page. The traditional 2-line results don't seem to be affected. I have noticed that sites that have many listings in local directories with addresses seem to rank better in local search.
Interesting research. It's good to see that SEOs' perceptions of local search appear to be in line with the actual algorithm. It's also good to see an external verification of the data. Are there any plans to increase the size of the dataset? It looks like that could be a useful thing to do given the non-normal distribution of the residuals. Even more interesting, though, would be if it didn't, but I think that's rather unlikely.
Also, how did you approach this? Did the SEOMoz stats team and Dr. Peters work on it at the same time and independently, or did he do the number crunching and the SEOMoz team verify it?
Paris + Evgeni gathered the data, we did some of the broad and narrow question-asking, and Matt did the analysis on the data to help answer those questions. In terms of verifying his accuracy, that's why we publish: so others can review and ID potential areas of error or suggest new things to look at next time.
A good idea - peer review by a community, not just two people.
I think the interesting point that may lead to doubt in these results is the Shapiro-Wilk test not finding normally distributed residuals. Are you likely to be running the same experiment with a larger dataset in the future to see what effect this has? Also, given the conclusion that it is highly unlikely that page position is linear, are there plans to increase the scope of analysis to include non-linear models if there is an experiment on an expanded dataset?
Great research and a great base for further studies. I am especially interested in any possible update about the centroid factor. In fact, in a country like Italy, where there are big cities (Milan, Rome, Naples...) but also millions of people and businesses in small towns (Siena, Lodi, Viterbo) with a lot of "dispersion", the centroid factor can be a real problem. Take, for instance, a b&b in the countryside of Siena that is not administratively part of Siena... For it, the centroid factor is a penalty from the start, because it can't actually rank well for "b&b Siena".
I'm struggling with this at the moment. I have a client in a small town literally a few miles outside a large metro area. She wants to rank in the metro area as that area dominates search.
I've got a couple tactics to try however :)
Am I correct in assuming that these correlations are only for sites/businesses already ranking in the results, meaning to actually get in the 'top 7/10' listings there might be a different set of correlations that needs to be explored?
Yes - definitely. We're looking here at results that showed up on page 1 and what made them higher vs. lower in those results. Comparing that to businesses that don't appear at all in the SERPs (but are listed in Maps), might be very interesting.
This report confirms that web references - even if they don't provide a link - add a great deal of value to your local listing, particularly if they have information that corresponds to Google's local info (address, phone, etc.). So signing up for listing directories and review sites - even if there is no link value - still helps immensely with local ranking. This is a very interesting takeaway.
Another topic not mentioned is whether or not Geographic information files or meta data could assist further. The XML Sitemaps protocol with Google states that they will also read KML files.
Interesting point you make here, Doug. Have you seen results after optimizing both the geo sitemap XML file and the KML file? How much weight have you seen it put on listings in Places? Has anyone conducted research on this?
Very interesting study, which helps us a lot in understanding how Google Places functions. Furthermore, all the data were clearly presented, and the methodology is really exceptional and grounded in math. Very good job, guys.
On domains with city names having a negative correlation factor.... Sampling of many domains with city names would probably lead one to find lower authority and lower trust on these domains. How do we determine if the negative correlation is a factor of the city name in the domain and not caused by an authority factor?
Great data Rand! Thanks... the more info we can get about Google local/places the better right now. Keep it coming!
Rand,
The keyword usage factors chart was jaw-dropping for me, to say the least. We have been working on a project for a limousine company in Dallas. As with the majority of luxury services, it is a highly competitive space. One strategy we used was to create an exact match domain with the service keyword and city keyword, “limousine service in Dallas”. The site has aged links pointing to it in the same industry space, and it is still on page 10 of the SERPs…
Your thought that “nationwide sites trying to rank individually on city-targeted pages don't perform as well as local-only results in general” is dead on with respect to our own research. My head is spinning with the possibility of taking the city out of the domain to test where it ranks for location queries.
Interesting that overall domain authority would have an impact rather than page-specific details. So would this mean that if I were a company with multiple locations throughout multiple metro areas, I would be better off executing a subdomain-specific campaign? I.e., indiana.myproduct.com, illinois.myproduct.com, alaska.myproduct.com instead of myproduct.com/indiana or /alaska or /illinois?
Great research guys!
Rand, I have to think that Google is trying to keep the national directories out of the local results--it's not typically a great user experience (I want to find a dentist near me, not a list of all the dentists in my city, for example), and kind of competes with the local results themselves. I'd bet that Google is looking for the city name in the URL (and not the domain) and really using that as a negative ranking factor. I'll bet that this is NOT an outlier!
I do find the negative correlation of the place name in the URL very interesting - and not what I would have expected. Also, I believed the use of keywords in the title went against Google's criterion of putting your business name as it appears everywhere else.
Some interesting things to digest here and thank you very much for sharing.
Great info! It's nice to have some data to back up assumptions about how the local algorithm works. The webinar on this stuff was fantastic as well - I definitely recommend all the pro members check out the recording.
Thanks again for this phenomenal research! The webinar was one of the most useful I've attended and I'm hard at work implementing the recommendations for all my locally-based clients as we speak.
- Evan
Can we rank with a single account for a different city/location?
For example, if I have an office set up in Manhattan and want to rank my Google Places account for NYC...
Please help.
This is a lot of data. I have read it twice and am amazed by all of it. Wow! This helps, thank you so much.
That's the best thing I've read about local search. 2011 is going to be the local search year, and this report will be of great use to me. Well done!