I love SearchEngineJournal's Annual Awards. I think it's terrific that even a small community like search marketing can have its own mini-version of the Oscars each year :) It's fun, it builds friendly competition, and it inspires those of us who compete to work harder and earn our keep.
However, this year I noticed some particular problems that plague many web surveys and figured it would be worthwhile to point them out. The following are some important guidelines to keep in mind while designing web-based surveys and contests.
Use a Definitive System to Establish Nominations
Some complaints at the SEJ awards centered on the nomination process, which consisted of comments posted to a blog entry. This can be avoided in a number of ways, so long as a systematic, established process is worked out. For example, when Jane puts together the Web 2.0 Awards, she accepts 300-500 nominations, then runs through a few dozen lists of "Web 2.0" sites and identifies those that have an established presence and a certain level of popularity, and that fit the criteria.
My suggestion for SEJ might be to find all blogs that fulfill certain category-specific criteria, whether that be topical focus, subscriber count, PageRank, monthly visits, etc. SEJ could, for example, set the bar for "best link building blog" to be a blog that:
- Produced at least 3 posts in each of the 12 months of 2007
- At least 30% of all blog posts were on the specific subject of link building
- Has in excess of 100 blog subscribers (according to Google or Bloglines subscriber numbers)
- Has no fewer than 5,000 external links according to Yahoo! Site Explorer (or a homepage PageRank of 4/10)
These aren't perfect criteria (just examples), but they at least create standards that would give the nomination process a fairer, more even distribution. Applying this same type of systematic control to nominations for any awards or survey will produce better results in the end (and certainly end much of the complaining that plagues this type of content on the web). To make the idea concrete, a rough filtering sketch follows.
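Purely as illustration, here's a minimal sketch of what that kind of systematic filter could look like in code. Every field name, threshold, and data point below is hypothetical, mirroring the example criteria above rather than any actual SEJ process:

```python
# Hypothetical sketch: filter candidate blogs against explicit criteria.
# All field names and data are illustrative only.

def qualifies_for_link_building(blog):
    """Apply the example 'best link building blog' bar from the list above."""
    return (
        all(posts >= 3 for posts in blog["posts_per_month_2007"])  # 12 months
        and blog["share_of_posts_on_link_building"] >= 0.30
        and blog["subscribers"] > 100
        and (blog["external_links"] >= 5000 or blog["homepage_pagerank"] >= 4)
    )

candidates = [
    {"name": "Example Link Blog", "posts_per_month_2007": [4] * 12,
     "share_of_posts_on_link_building": 0.45, "subscribers": 350,
     "external_links": 12000, "homepage_pagerank": 5},
]

nominees = [b["name"] for b in candidates if qualifies_for_link_building(b)]
print(nominees)  # ['Example Link Blog']
```

The point isn't these particular thresholds; it's that every nominee passes the same published test, so no one can claim the field was hand-picked.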
Don't Ask Partisan Fans to Rate on a Sliding Scale
This was almost certainly the SEJ Awards' biggest mistake. In any kind of survey environment that asks for popularity ratings and offers an incentive for inaccuracy (favoring one blog or site over all others), the use of a sliding scale voting system is going to produce badly skewed results.
Here's an example of how SEJ's Awards were laid out (re-created from memory, as the survey itself is no longer accessible). Although participants could leave any line blank (if, for example, they had never read that blog), this wasn't clear in the initial instructions and did end up causing some confusion.
As you might imagine, this system creates the antithesis of a positive rating system because of how partisan voters will contribute. If, for example, I wrote a post on SEOmoz asking our readers to vote for us at the awards, you might expect that rabid SEOmoz fans would see how the survey is constructed and rate SEOmoz a "5" and give all the others a "1" to help boost our chances of winning while simultaneously damaging everyone else (I've illustrated this using TropicalSEO as an example).
In the blog post on the subject of the "best SEO blog", for example, you'll see that 55 voters gave SEOmoz a score of "1," 47 gave that score to SEOBook, and 27 gave a "1" to SEO By the Sea. I have a hard time believing that this many people truly felt that these sites were of such low quality (particularly SEOBook, which is consistently excellent). The more likely scenario is the one I've described above, where partisan voters wanted to help the blogs they cared about through any means possible.
As a survey designer you cannot throw up your hands and simply say "Well, the Internet's full of @ssholes." You have to become smarter than the partisan voters and create a system that finds the signal amongst the politics. A good move for this particular survey would have been to use a ranking order - forcing users to rank the blogs listed from most to least favorite. With a system like this, little room is left to negatively influence the results:
In that layout, the options should ideally be randomized for each visitor. Participants then fill in the rank fields themselves, ordering the sites from 1-8, which prevents the high-low partisan voting problem presented above. A sketch of the randomize-and-validate approach follows.
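As a sketch only (the blog names beyond those mentioned in this post are placeholders, and the validation rule is my assumption of how such a ballot would be checked):

```python
import random

# Sample ballot options: blogs named in the post, padded with placeholders.
BLOGS = ["SEOmoz", "SEOBook", "SEO by the Sea", "TropicalSEO",
         "Search Engine Land", "Blog F", "Blog G", "Blog H"]

def ballot_for_visitor(rng=random):
    """Serve the options in a fresh random order for each visitor."""
    options = BLOGS[:]
    rng.shuffle(options)
    return options

def is_valid_ranking(ranks):
    """A complete ballot assigns every blog a unique rank from 1 to N."""
    return sorted(ranks.values()) == list(range(1, len(ranks) + 1))

options = ballot_for_visitor()
ballot = {blog: position + 1 for position, blog in enumerate(options)}
assert is_valid_ranking(ballot)
```

Randomizing the display order also helps wash out order bias - the tendency to favor whatever appears at the top of the list.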
Craft Clear, Concise, Unimpeachably Exact Questions
This is probably the hardest thing to do when creating a survey (as SEOmoz certainly learned during our SEO Quiz process). Nearly every question is going to have some room for interpretation, but by taking care and using an unhealthy degree of paranoia about potential interpretation problems, you can prevent squabbles like those taking place at Sphinn and SEJ.
For that specific example, rather than asking "Who is the Most Giving Search Blogger," I might incorporate the criteria Loren noted into the question itself, perhaps crafting something like "Which of the Following Bloggers Provided the Most Overall Value in Posts through Research, Influence, Coverage, and Openness?"
Questions, in general, should also be goal-oriented, so if the goal is to discover which blogger is most popular, the question should be framed in that way. If the goal is to find out which blogger voters think provides the best content quality overall, then a different approach (and a different question) is needed.
Don't Declare a Winner with Tiny Margins
The number of survey participants dictates your margin of error, and in a small survey (with fewer than a thousand total voters), it's a given that a substantial margin of error will exist. Thus, unless you consider the survey participants to truly be the entire universe of judges on the subject (which some contests, like the AP college sports polls or the Oscars, in fact do), I would be hesitant to declare a singular winner unless the stats show a victory well beyond the margin of error.
For example, in the SEJournal awards, I was given the award for "most giving blogger." While I certainly appreciate the sentiment, when I look at the voting and see that 2 other bloggers received only 4 and 5 fewer votes than I did, I'd probably suggest a shared title between the top three candidates (Danny Sullivan, Barry Schwartz, & myself). The quick calculation below shows why margins that small fall inside the noise.
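A back-of-the-envelope sketch, using the standard margin of error for a proportion (all vote counts below are made up for illustration; they aren't the actual SEJ tallies):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a vote share p among n voters."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical tallies: a "winner" with 80 of 200 votes, runner-up with 75.
n = 200
winner, runner_up = 80 / n, 75 / n
moe = margin_of_error(winner, n)

print(f"winner: {winner:.1%} +/- {moe:.1%}")  # about 40.0% +/- 6.8%
if abs(winner - runner_up) < moe:
    print("Difference is within the margin of error -- consider a shared title.")
```

With a few hundred voters, a gap of 4-5 votes is far smaller than the margin of error, which is exactly why a shared title is the more defensible call.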
Be Wary of Referral Sources & Biasing
Online survey software needs to be savvy: it should track referrals and map them to entries. While I strongly suspect that the voting at the SEJournal awards was actually fairly balanced, when you're building a web-based survey, being able to pull out data showing the skews based on referral source is incredibly valuable. If I were running the SEJournal awards, one of the most interesting numbers to see would be the votes of non-partisan referrers (e.g., those voters whose referral source to the blog post or voting page did not include any of the mentioned websites). Comparing that data to the final results might show some fairly serious skewing that one could systematically remove (by not counting votes in categories where the referring site was nominated, for example). After all, in a perfect world, the awards shouldn't be a measure of who can get the highest number of their readers to vote for them, but an actual measure of what the average industry insider thinks is best. A sketch of that kind of referrer-based analysis follows.
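As a rough sketch (the domains, vote records, and field names are all invented for illustration):

```python
# Hypothetical referrer-based skew check: compare the full tally against a
# tally restricted to voters who did not arrive from a nominated site.
NOMINEE_DOMAINS = {"seomoz.org", "seobook.com", "seobythesea.com"}

def is_partisan(vote):
    """Flag a vote that arrived via one of the nominated sites."""
    return vote.get("referrer_domain", "") in NOMINEE_DOMAINS

def tally(ballots):
    counts = {}
    for vote in ballots:
        counts[vote["choice"]] = counts.get(vote["choice"], 0) + 1
    return counts

votes = [
    {"choice": "SEOmoz", "referrer_domain": "seomoz.org"},
    {"choice": "SEOBook", "referrer_domain": "google.com"},
    {"choice": "SEOmoz", "referrer_domain": "sphinn.com"},
]

print(tally(votes))                                     # all votes
print(tally([v for v in votes if not is_partisan(v)]))  # non-partisan only
```

A large gap between the two tallies is a strong hint that a category's result is being driven by fan traffic rather than broad industry opinion.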
Now a sharp rebuke of myself. Posting something like this after the survey's already complete is easy and it's even somewhat reprehensible. After all, if I really knew all this ahead of time, shouldn't I have alerted Loren and the SEJournal crew when the survey first launched? As is clear from this post, he responds to and accepts criticism quite well! Shame on me for my late timing. I do apologize for that. Nonetheless, I hope it's still valuable and interesting and will help everyone who's working in the realm of survey design think carefully about the process.
ADDENDUM: SEOmoz is (no surprise) launching its own survey of search marketing industry demographics (not an awards or contest) next week. Hopefully, we can take some of our own advice to heart! I've personally been working with a professional survey design company over the last month, learning tons of interesting things about the process (and please realize that what I'm sharing here is only the tip of the iceberg when it comes to survey design). In fact, I think the following resources might provide even greater insight for survey crafters:
- Questionnaire Design & Survey Sampling - Professor Hossein Arsham from the University of Baltimore offers insight into survey crafting and interpretation techniques.
- Writing Good Survey Questions: Examples - from the Berman Blog, some great advice on crafting good survey questions to minimize biases and errors.
- Violin Duel a Draw for Antique Stradivarius - although it's not a web survey, note the great care taken to produce solid results, with both blind and sighted testing, and with trained musicians and amateurs alike. Yet, even with all the evidence, no firm conclusion was drawn because the scores were so close.
BTW - No insult or fault is intended towards Loren Baker, whose generous donation of time organizing and promoting the contest is fantastic (as is his sharing of the data reports, without which this post would have been impossible to write). I'm merely trying to illustrate missteps that I myself have taken in the past, in the hope that it can help bring awareness for the future.
Great points, Rand -- but I think the post you really meant to do was....
"I've won for most giving search blogger. Thanks to all those who supported me!"
You don't need to do this long post on survey methodologies that feels like you're trying to explain why maybe Barry and/or I should have shared your win, like you're almost embarrassed about it or perhaps reacting to those who were pulling for Barry in comments at Sphinn: https://sphinn.com/story/21974. And since I know from our past talks that you have it in your head that the Sphinn community somehow dislikes you and SEOmoz (and I say again, they do NOT), I'd point out that so far, that post is at 45 Sphinns - a good sign of support - plus lots of comments praising you, as well.
For me, in time-honored awards tradition, I was honored just to be nominated. And in retrospect, I probably would have asked Loren to drop me, since it's my job to be blogging as I do. Yep, I work really hard at it -- but it's not like it's a side gig for me, as it is for someone like Bill Slawski.
Like I said, lots of great points on survey design, but smile, take your award, and let your community know about it more directly, Rand. They shouldn't have to discover it hidden in the middle of this. One of the other Mozzers should be doing a headline post -- our bossman got a top honor, and we're rightly proud of him.
I'm going to have a rare disagreement with you, Danny.
I'm certainly happy to have been nominated and happy to have received many votes, but I don't, for one minute, think that the results are an accurate representation of how search marketers in general (or EVEN just the people who took that survey) really feel. The survey's own flaws are what make me question the results, and hence the blog post.
Well, actually, since we're working with that professional survey company on one survey and launching another one next week on the blog, that might actually be the reason for the post - I've got it on my mind.
BTW - For anyone who doesn't yet know - Danny and I will be on Webmaster Radio this Friday talking about the extremely exciting SMX West event in Silicon Valley!
before this gets flame happy - Rand -
you should continue the talk in private - it's just more professional
Agreed! And we are proud!
Rand, thank you for talking about surveys. I used to teach the basics of market research, and while it has been a while since I designed a survey, I always try to learn something when I see one online. What about an NA (not applicable) option? What if there were people who weren't familiar with one of those websites? You need to give that option too, because those votes could go to the wrong website. "Least favorite" is not the same as "I have never visited it."
I tend to agree that an N/A option is nearly always a good idea.
As a voter I would feel confused at being asked to vote for something I knew little or nothing about.
I think your suggestion of sharing that award with Danny and Barry puts you beyond the margin of error for 'most giving blogger'! Congratulations on another excellent year of blog action...
I enjoy reading the SEJ awards and this is in no way aimed at this particular annual event, but . . .
There is nothing like producing an industry award for generating lots of relevant, industry-based inbound links. 'Best blog' awards are best of all, since the winners (who are therefore likely to link) are going to be great blogs.
With this in mind, fan based voting is likely to provide far more links (from fans trying to drum up support) than any rankings based vote system.
Equally, the selection of nominations should be done with a mind to which of those nominations are likely to provide the most buzz, rather than necessarily following any other criteria.
Again, I am not being cynical about the SEJ awards here, merely suggesting that there is often more to organising a media event of this kind than simply determining the best participants. Awards make for fantastic link-bait.
Did you know that there is actually an Awards for the best Awards? SO wish I could find the link...
(I know this because a former employer won one once; I don't have some sort of awards fetish)
At the risk of sounding cynical, I'm going to agree with Richard and say that these blog awards are more about getting links and buzz than an honest attempt at determining which SEO blogs are really the best.
Not that there's anything wrong with that; we should just recognize it for what it is.
But now I'm interested in the SEOmoz survey... what kind of data are we going to get? I would much rather have a really informative and well-executed survey that serves as a valuable resource than a silly blog popularity contest.
I should stress that I am not suggesting that awards-based link-bait is a bad thing; indeed, I find it laudable. I just think it is important to remember the motives behind any offering.
Cheers,
Manley
We're planning to collect mostly demographic and profiling information - so that we search marketers can know more about ourselves. The results will be published publicly.
The other, professional-level survey we're working on is for marketing research, and unfortunately, those results will be private, as we're merely a participant in the process (and thank goodness, because the cost is in 6 figures).
"And what's up with outing private conversations in public? That seems like an odd thing to do..."
Umm, no:
https://www.seomoz.org/blog/answers-to-questions-for-rand-round-2
"In fact, as savvy readers might be aware, one of the best ways to get on top of Sphinn in the last few months is actually to write something negative about SEOmoz :)"
I'm pretty sure you've had a few other comments like that in public, so no, I didn't think I was outing a private conversation at all -- IE, saying something you hadn't already said in public. That would be very uncool and wasn't my intention at all.
In terms of the survey, I actually don't think the world is as partisan as you assume. I liked that people could rate multiple blogs with 5s or whatever -- and actually, I think lots of people wouldn't just 5 their favorite and diss the rest. From what I saw in the voting in one category, that wasn't even the case. There seemed to be a good spread of votes.
Indeed, in contrast, when you force someone to rank blogs in descending order -- and they don't know some of those blogs -- you've got people guessing (and making bad guesses). I think the old Web 2.0 awards last time were like this. There were companies I had no knowledge of, but I'm pretty sure that in order to vote, you had to vote on everything anyway. At least with SEJ, you could skip things. But even in your linear system, if you allowed skipping, what if I really thought two blogs were coequal? You're forcing me to make a quality judgment I might not agree with.
Actually, the bigger issue than the sliding scale is the number of voters. If you have a big audience and tell them to vote, chances are you get a lot of votes. So the blog with the most votes wins, regardless of quality?
In the past, I used to not mention awards like these at all to my readers, not really wanting to skew the voting (IE, let SEJ's own audience make a decision). But then I learned that if I failed to tell my own audience -- and others told theirs -- I hurt my chances. And you can call awards unrepresentative, but folks still look to them.
You get into this with "referral sources" and biasing, but it's also very, very hard to correct. One way is to only let registered people vote. Perhaps you wipe out obvious skews. But then you're also introducing skews.
Clear questions. Yep. Agreed. Don't declare on tiny margins? Absolutely. When I ran the SEW awards, if things were that close, I'd often declare ties.
Definitely, the awards had flaws. For us having won in three categories -- and where I know that we've worked very hard in all those categories -- I guess I don't want to write them off as completely unrepresentative of anything. Several other good blogs won as well.
I guess in the end, Rand, I know that recognition people should get in some awards never comes -- so if it comes in other ways, accept it. You're giving -- many people clearly support that -- you've been a leader in a new generation of getting people to open up and share data and ideas. You can write off the awards, and the comments may help Loren improve, but I won't write off your win.
Danny,
I think you're giving the voters a bit more credit than they deserve.
But you do make the excellent point that if you'd like to rate multiple blogs with the same rating, the ranking system doesn't function as well. I still think it's the better choice in this case, as you'd be protecting against the partisan "one 5 and the rest 1s" voting that appeared to take place.
BTW - I think that in at least two of the categories where SELand won, even a very high margin of error wouldn't make it a tie with the other contenders, so you can be fairly sure that you really are the champ :)
Also - My bad on the private conversations issue. I had forgotten the previous reference you pointed to (although I'm not sure I've ever discussed the issue elsewhere in public, and I hoped that in that singular instance, it would be read as tongue-in-cheek rather than seriously).
LOL are you guys talking about Dr. Phil talking to the media about his visit with Britney? https://omg.yahoo.com/spears-family-says-dr.-phil-went-too-far/news/5436 ("And what's up with outing private conversations in public? That seems like an odd thing to do...")
Hey congrats to Rand and Barry and Danny for a great job...you guys all rock!
Very good points. It's always hard to do a survey that can be influenced by who is the most popular and who can push the most amount of traffic. Look at Colbert's method of getting everything named after him.
In general I agree with you on the way the survey and rankings are set up. But there is one flaw in your suggested 'ranking order' variant.
It is possible with SEJ's sliding scale to not rank certain blogs, giving the opportunity to only rank the blogs people actively read and know about.
If a person only actively reads 5 of the 8 blogs, he/she would have to award the other 3 blogs points without even knowing what to give them. That in turn also gives skewed results.
I agree, all of these surveys are really good, but it's important that they are set up correctly.
Trbl - conceptually, you could also modify my ranking system to exclude any blogs you don't read, and thus only fill in, say, rankings 1-5. I believe that would solve the issue you're describing. A very good suggestion, BTW!
Well, the REAL way to do that is to give a "Do not read" option for each blog.
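For what it's worth, a tiny sketch of how the ranking validation might accommodate that option (the rule that the remaining ranks must still form an unbroken 1..k sequence is my assumption):

```python
def is_valid_partial_ranking(ranks):
    """Treat None as 'do not read'; the ranks actually given must still be
    an unbroken sequence 1..k with no duplicates or gaps."""
    used = sorted(rank for rank in ranks.values() if rank is not None)
    return used == list(range(1, len(used) + 1))

ballot = {"SEOmoz": 1, "SEOBook": 2,
          "TropicalSEO": None,  # voter marked "do not read"
          "SEO by the Sea": 3}
assert is_valid_partial_ranking(ballot)
```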
I assume we also shouldn't forget about simplicity and usability... Too many options can confuse...
I generally prefer 1-variant polls - simple and fair. Sure, you can like more than one option, but in the end, with a "ranking" system, you still have to give 5 to one option and 4 to another, so that's pretty much the same...
Trbl, that would have been fine and all, but I wasn't aware that you were allowed to skip a vote/not vote for something (I don't know how clear that was to voters), so I had to arbitrarily pick a number for some blogs that I wasn't familiar with or don't read, which is exactly what Rand said was a big problem with the voting structure.
I think this contest type is only a popularity contest, not a quality contest. Of course, some blogs (like this one) are quality blogs and popular blogs at the same time... but, in general, it could be very difficult for a quality blog with few readers (because it is relatively new) to become a winner in these contests. It is different from the Oscars, because there a new director or actor can win if his work is really good, even if he is not very popular.
Well, if you are new, everything is much more difficult, so that's quite fair, I guess. "Quality" is too hard a concept to define, and it should be evaluated by experts [not by the general anonymous public], so the only way to measure is "popularity" within the niche, and I see nothing wrong with that. If you were clever enough to grow a popular resource (again, within the targeted audience, like Ciaran said recently), you well deserve the award, I guess...
Colour me stupid; I took it that you ranked the blogs in order of which you thought were best, so several (excellent) blogs that got 1s will have done so simply because I didn't think they were as good as the others.
Ah, I see what you're saying! But see, I read the large number of 1s completely differently. That happened to SEL as well -- people voting 1 for us in one category were second only to those voting 5. I scratched my head looking at that yesterday and wondered why, and I thought I knew the answer -- many mistakenly thought 1 meant the best score (despite the instructions).
If so, that's an entirely different type of skewing -- and it's more likely to impact the strong sites that would attract high scores rather than those in the mid-range.
But maybe I'm wrong and people are that partisan. I just never thought of it that way. I guess I figured people can just be easily confused; if you're voting for the number one site, you might be inclined to think 1 is best.
Good point! I like how you see the best in people while I see the worst :) I need to start waking up on the other side of the bed, I guess.
In either case, I think we'd all agree that with the amount of skewing and the closeness of the tallies, there's really no way to be sure who actually won (nor even what was measured in the voting).
BTW - What's up with you being the only commenter not using the inline reply feature!? :)
Oh, you mean by inline that threaded thing. I don't use it because I hate it. I really, really hate it. Hate, hate, hate. Each reply gets harder and harder to read, until it's all smushed up to like two words each. I vote to kill it. Kill, kill, kill. You should do a post about who likes and hates it. A survey!
(indents himself to squish following comments...)
Rand, Danny, et al... As the quiet Associate Editor at SEJ, I just wanted to let everyone know that your comments and criticisms are heard, understood, and certainly appreciated.
Working closely with Loren, I can assure you that the goal was to continue the tradition of hosting the awards while creating an easy way for readers to participate. There's always been a strong response to the awards, and we would never want to let our readers down.
In retrospect, there are a lot of things that could have been done better. But I think that is true of everything we all do. Keeping it short and sweet, I just wanted to pop up, on the record, and say that I personally appreciate your willingness (and that of everyone else who took the time) to share your thoughts with the SEM community. Our only goal is to build a better awards program for 2008, and it's clear we have some great feedback to start with.
Good points, Rand, but I have to agree that the ranking is particularly challenging... whichever direction you go, there is going to be a less-than-ideal outcome. On one hand you juggle potential overt bias, and on the other you deal with unwarranted ratings (ratings based on little to no actual experience with the blogs in question).
But I completely agree with the need to eliminate as much openness to interpretation as possible. This is perhaps one of the more challenging aspects that even some of the most polished, professionally designed surveys still fall short on.
Even more subjective, but it would also be interesting to have "competing" blogs be given a topic, generate their posts, and then all post simultaneously so as not to give undue advantage or influence. Voting would then open on those specific posts, looking at which adds unique discussion, which adds value to the conversation, which takes the most engaging approach, etc. I see this more for fun - a way to engage bloggers and the audience, and possibly cross-pollinate readership... a real blog-off.
Shame on you Rand for crossing the picket line and accepting your award ;-)
Very good points on the ranking scale and question wording. I do agree with you on that. The questions were somewhat misleading, and there was definitely a good way to avoid partisan votes.
But to tell you the truth, that was also the beauty of the competition [as well as the beauty of most competitions]: [1] the "sliding" ranking scale turned the awards into a fans' competition, too, making it fascinating to participate in, and [2] the "misleadingness" doesn't matter in most cases: OK, people might have interpreted the questions in different ways, but they were all biased no matter what, and in most cases would have voted for their favorite blogger anyway. Accept it: voters are not experts [therefore I don't quite agree with your "Oscars" comparison] - voters are people who are biased, and your victory is also their victory, so please don't apologize for it, and please let's have only one winner!
Yeah, I agree about the Oscar comparison. Every year I sit on the couch cursing at the screen because crap wins like Jennifer Hudson for best supporting actress are allowed to happen.
You bet! Oh, I wish we were the ones voting....
RE: the all-1s-and-one-5 thing. You could handle it by taking the median of each ballot; if only one value is above or below the median, count only that vote and discard the others.
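Purely as a sketch of one reading of that idea (assuming each ballot is a mapping of blog to score):

```python
import statistics

def filter_ballot(ballot):
    """If exactly one score deviates from the ballot's median (e.g. all 1s
    and one 5), keep only that vote and discard the rest."""
    med = statistics.median(ballot.values())
    outliers = [blog for blog, score in ballot.items() if score != med]
    if len(outliers) == 1:
        return {outliers[0]: ballot[outliers[0]]}
    return ballot

print(filter_ballot({"A": 1, "B": 1, "C": 1, "D": 5}))  # {'D': 5}
print(filter_ballot({"A": 3, "B": 4, "C": 2, "D": 5}))  # unchanged
```

This keeps the genuine preference on an obviously partisan ballot while throwing out the retaliatory 1s.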
The blog article I have been waiting for. Praise Him! I work at a market research firm, and we have to deal with surveys all the time. I can't tell you how stupid some of the marketers at Fortune 500 companies are in regards to web surveys. They don't know the difference between a sliding scale and a multiple choice. Idiots. I wish everyone that I work for could read this article. I thought of another good site that gives some pointers on survey creation too. It's called <a href="https://www.aboutsurveys.com">aboutsurveys</a>. It's actually written by a bunch of PhDs and Scott Smith, who is a stats guru.