Tracking Your Link Prospecting Using Lists in Link Explorer
Tired of tracking your link prospects manually? Learn how to use Link Explorer's tracking lists to keep tabs on your prospects and track links to specific pieces of content. Whether you're doing link outreach, PR, or just keeping tabs on your industry news sources, Link Explorer's list management can make your life easier.
Blog Post: May 29, 2018
Please note that Link Explorer is entirely new -- with a much larger, fresher link database (an entirely new back-end) -- and replaces Open Site Explorer. We realize that Open Site Explorer's data has been lagging for a long time, but we hope you'll try Link Explorer and see how dramatic the difference is.
That's good to hear -- thanks. We're exploring doing some of these shorter videos (2-3 minutes?) for specific product features. There's so much that's rolled out in the past two years that even our long-term customers miss. It's so easy to overlook a blog post or a couple of tweets, and sometimes you just don't have time to invest an hour or two digging into new product features. Feature discovery has been a big challenge for us.
Great feedback -- thanks, Heather! We're definitely trying to sort out better annotation features, because workflow and communications are such a huge part of link-building campaigns. Love the where-in-the-funnel approach.
How to Write Meta Descriptions in a Constantly Changing World (AKA Google Giveth, Google Taketh Away)
Blog Post: May 16, 2018
That's something I wouldn't be comfortable answering off the cuff for any given site. There are sites that have chosen to leave them all blank and have done perfectly fine. Personally, I still like having some of that control. It depends on the costs, risks, etc., though. Wikipedia doesn't write Meta Descriptions, because they couldn't do it by hand, and so they'd have to auto-generate them. Google's capability to auto-generate is better than most of ours, so if that's your option, then probably leave them blank. Even then, I think there's a hybrid approach where you could leave them blank for some pages (like long-tail product variations), but write them for critical pages.
So, let me just say that I don't think that keywords in your Meta Description are a direct, Capital-R Ranking Factor. However, I think engagement (long clicks, etc.) is very important to ranking, and your Meta Description impacts CTR pretty profoundly in some cases. I just try to use "ranking factor" very carefully. I think it matters quite a bit for SEO, in a broader sense. I also want to be clear that it matters in the sense of driving people's interaction with your site and those signals, not in the sense of trying to fill it with keywords.
Interesting. It's a bit hard to separate, because I'm seeing a solid correlation between SERPs with Featured Snippets and those with display snippets >300 characters. While Featured Snippets can theoretically come from any URL ranking on page 1, they're *much* more likely to come from the top 3 positions. So, I'd expect to see some relationship overall, with top-ranking positions more likely to show extended snippets.
Last time I dug in, we were seeing rewrites on about 50% of display snippets, *but* the challenge is that a lot of rewrites are partial, so it can be tough to tell when Google is using part of the Meta Description, even if it's not an exact match. There are other places it can be used, like third-party tools and social sites, so it depends a bit on your situation. Some very large sites are dropping them and letting Google do the rewrites, and I expect we're going to see more of that over time. Personally, I still want what control I can get for critical pages, but that control may be less and less as we move forward.
Keep in mind that your Meta Description is, best we know, not a ranking factor in 2018. So, only use multiple keywords if it's natural, descriptive, and likely to generate clicks. You don't want to keyword-stuff it, or Google may consider it low-quality and do a rewrite.
That said, it's a lot easier to naturally mention 2-3 key phrases in 155 characters than it is in a title tag. As I mentioned in the post, try to make sure the critical keywords/concepts get mentioned earlier in the tag.
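To make the character budget concrete, here's a minimal Python sketch of the kind of pre-flight check you could run on a description, assuming the roughly 155-character desktop display limit discussed above. Google publishes no official cutoff, and the word-boundary truncation here is a simplification of how display snippets actually get cut:

```python
# Hypothetical helper: preview how a meta description might be truncated
# in a desktop SERP. The 155-character limit is an assumption based on
# observed snippets, not an official Google number.

DISPLAY_LIMIT = 155

def truncation_preview(description: str, limit: int = DISPLAY_LIMIT) -> str:
    """Return the description roughly as a SERP might display it."""
    if len(description) <= limit:
        return description
    # Cut at the last full word before the limit and append an ellipsis,
    # loosely mimicking how Google truncates display snippets.
    cut = description[:limit].rsplit(" ", 1)[0]
    return cut + " ..."
```

A quick check like this also reinforces the advice above: if the critical keywords land after the cutoff, they simply won't be seen.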
I think we talked on Twitter a bit. I do think Google may be using a pixel limit for display snippets (we use one in Moz Pro for display titles), but it's a bit tougher to pin down with the multi-line format. I've also just found it confusing for people. We can grasp the ideal of pixel width for a single line, but it gets weird with Meta Descriptions. So, I've opted to stick with a character limit, even knowing it really is an approximation (and the pixel width is more precise).
Good point -- recipe SERPs look a bit more like video SERPs. That thumbnail and the rich snippets cut into the space a bit, best I can tell.
Google isn't supposed to mess with my display snippet until 30 days have passed and comments are closed on the post :)
That first version (which I can also see) is, admittedly, a lousy snippet. In this case, I think it's entirely due to query relevance. Google can't find that phrase "write meta descriptions" in my Meta Description, so they're looking in the text. That's a tricky thing, because you can't write a description that matches every query, and Google is always going to take liberties. I don't think it's necessarily an issue of the description matching the first paragraph.
If I wanted to do keyword research and decide which terms I really cared about, I could put those terms in the Meta Description and try to improve my odds. Most days for most pages, that feels like overkill, but once a page exists for a while and starts to get traffic, it would be interesting to see what its highest-volume terms are and rewrite the snippet around those terms.
My gut feeling is that, since Google loosely confirmed this and we're seeing it widespread (not at the typical low-% testing level), this is not a test. However, something can go live and still get pulled back. There are definitely no guarantees, which I hope is clear in the post. We're only talking about today, as best we can.
Within a few weeks, I'm going to be trying to claim that the inverted pyramid is useful for buying shoes and making sandwiches, and the zombies of journalists long-gone will come to life and eat me (and probably justifiably).
You'll still see some snippets >300 characters in some situations. I'm not aware of any return to that at scale.
I think that's a great strategy, and if that's how you've approached writing them, I would let them stay 200-250 characters.
I would give it a little time. First, we don't know if this is going to change again. Second, if your descriptions have useful information in the first 150 characters, it's not disastrous to have them cut off. People who are interested will follow the "...". If the first 150 characters are mostly lead-in or marketing copy, then I honestly don't think that's a great approach for Meta Descriptions. People skim, right or wrong, and I think you have to get the most important information out front.
Content for Answers: The Inverted Pyramid - Whiteboard Friday
Blog Post: April 27, 2018
To be honest, I didn't like the "Justice League" movie all that much (it was ok), but I do like the t-shirt :) I'm a casual fan of both universes, but movie-wise have much preferred the Marvel offerings.
That's a great way of thinking about it. Too often, we put too much of our own egos into our content and just assume everyone will want to read it. Getting visitors the information they want quickly is a sign of respect.
You mean in terms of seeing a Featured Snippet at all (vs. competing for an existing one)? I haven't seen a direct relationship with competitiveness, but I definitely think that different industries tend toward different types of questions/content, and some types of questions don't usually generate Featured Snippets. So, I wouldn't be surprised if opportunity is low in some industries, for a variety of reasons. On the flip side, some industries -- like anything in the "How to..." space (home improvement, for example) are almost overrun with them.
Sorry, I can see how those two points might seem contradictory. I think the inverted pyramid is a good structure for answering questions and ranking for Featured Snippets, but I've found that it's not necessary for your answer to be at the top of the page to earn a Featured Snippet. Long-form pages with multiple, related questions have performed very well for us here at Moz.
I think it really does come down to what makes sense for the user. Very important, unique questions should probably have their own pages, but oftentimes I find that a big, umbrella question naturally has follow-up questions, and I think it can be good for both visitors and SEOs to combine those follow-ups into one larger piece of content.
The main thing I worry about is people writing hundreds of single-paragraph answer pages on one site, because I think that can become thin, low-quality content very quickly.
I think Google is starting to separate SERP features by intent more and more. Featured Snippets are well suited to informational queries, and SERPs with Featured Snippets don't tend to have things like shopping results (PLAs). Queries with commercial intent get very different treatment. I think Google understands that these types of features exist at different stages of the buyer funnel.
In general, I think it's great to provide internal links/anchors for long articles. From a Featured Snippet standpoint, though, it may produce some odd behavior if Google interprets those links as a list. Then again, if that list is a succinct summary of the major points of the article, then that might be fine.
Generally, this kind of long-form content is a bit different than Q&A style content. Here's a post where I broke out links to sub-sections, but it's a different kind of long-form content than what I'd use to target a question query:
https://moz.com/blog/mastering-google-search-opera...
Fair points all around. I don't want to give the impression that the inverted pyramid approach is appropriate for all content from a content marketing or SEO perspective any more than it is from a journalism perspective. I think it's well suited to creating question-and-answer style content that's user-friendly and Google-friendly (especially if you're targeting Featured Snippets). For the inverted pyramid to work, you have to have a succinct premise or unifying idea. That's not always how content is (or should be) structured.
Ah, sorry -- you mean from Google's perspective? Yeah, that's a tough and interesting question. I think there are a couple of forces at play...
(1) Google needs short answers for small screens and voice search. Google Home, Google Assistant and even high-res mobile screens aren't well suited to a desktop SERP. In some cases, mobile-first design has even become voice-first for Google (when voice is appropriate). I think they have to face this reality, *and* realize that it's going to cut into revenue. Not sure there's a simple solution to that problem.
(2) I strongly believe that Google is starting to define intent more strongly and realizes that not every SERP is equally well-suited to ads. What we're seeing is more separation between informational (top of funnel) and commercial (bottom of funnel) queries and SERP features, with Google focusing ads and shopping at the bottom of the funnel and focusing on answers and search experience for the top of the funnel. In some cases, where intent is unclear (or, maybe better to say "mid-funnel"), they're pushing people to refine their searches and move them toward commercial queries that then have ads.
I don't want to say that "easy answers" are necessarily bad content, but the practical reality in 2018 is that Google is gobbling up more and more of them every day. If the answer in that box is 95%+ of what the searcher needs, then why should they click?
It's nice to hear a perspective outside of marketing. In my own experience, I feel like burying the lead is too often an act of ego, in both journalism and marketing. We think that what we have to say is so important that people will just naturally wade through paragraphs of text and our unadulterated brilliance to get at it.
Now, granted, there are articles that take that approach and work. There are brilliant writers out there, no doubt. The day-to-day reality, though, is that attention spans are very limited, the web has only made them shorter, and all of us want to know we're on the right track for the information we need. Most days, that means giving people a structure that's familiar and clear.
We're working on a big CTR curve study now, but it gets complicated fast. On average, SERPs with Featured Snippets see a drop in overall clicks but a boost for snippets where the URL is ranking below #1 (still trying to sort that out). Knowledge Cards show a much larger drop in CTR, which I think is because those almost always have definitive answers.
So, I think it comes down to whether a Featured Snippet answers a question that has a definitive answer (i.e. what's in the box is factual and complete) or a question that requires rich information. Anecdotally, for pages we've optimized for snippets, we've seen solid traffic gains, but these are all questions that generally require more detail.
I specifically refer to the more technical "Featured Snippet" to distinguish these answer boxes from "Knowledge Cards" which are answers that come from the Knowledge Graph and don't have an organic link. We're doing some preliminary analysis, and the good or bad aspect is tricky. Search results with Knowledge Cards see a huge drop in CTR to organic results. Search results with Featured Snippets see some drop, but it seems to vary a lot with the answer and whether it's an easy, definitive answer or one that naturally begs for more information.
We've had very good luck with ranking/traffic on answer-focused pages, as have some others, but we've focused on the kinds of questions that require rich content to answer. That said, there are definitely cases where Google extracting the answer is causing people not to click.
I apologize for over-generalizing -- many penguins go days at a time without robbing a bank.
Faster, Fresher, Better: Announcing Link Explorer, Moz's New Link Building Tool
Blog Post: April 30, 2018
Regarding the ranking keywords, we're showing 17,000 -- might've been a temporary bug. You can view more about that data in Keyword Explorer.
The link and authority difference with Ahrefs can be tougher to pin down. Although our new link index is much larger than the old one, we still tend to take a quality-over-quantity approach, and so a big difference in scores could indicate a possible issue with the site's link profile (namely, that there are a lot of links, but many are low quality).
In some cases, it's possible that Ahrefs is seeing links from very specific sites or countries that we're not seeing, although this would suggest that a large number of links are concentrated in a small number of sites.
The other possibility is that there are some temporary or old links that are no longer active.
Looking at the data, some of the site's links are coming from domains that have high DA but the pages themselves are long-tail pages with much lower PA. I'm seeing some article marketing links, for example, where the domain has high authority, but the article/post itself is buried pretty deep.
Of course, we could also be wrong. If there are specific links we're not seeing (especially high quality ones), please let us know. We are definitely looking to improve the index and our DA/PA metrics.
The new data should be live as of this morning in all of our tools, including Mozbar. You shouldn't have to upgrade to the newest version, although we'd certainly encourage you to do so. If you're seeing a mismatch, let us know.
Zero-Result SERPs: Welcome to the Future We Should've Known Was Coming
Blog Post: March 15, 2018
@Bill -- I like this. I'm going to claim this is what I meant all along.
I just made that whole "open informational" vs. "closed informational" thing up over the weekend, so now I'm trying to figure out what it actually means.
Right -- you can't take the Medical Knowledge Panel approach to the whole web. It just doesn't scale. The things that are clean/curated don't scale, and the things that scale are messy/imprecise, and Google has to find a bridge.
I'm curious if/when/how (probably just when/how) Google will start to integrate index-based answers (currently, Featured Snippets and PAAs) into the Knowledge Graph. They have to find automated ways to better vet those answers (as you said), and, as they do, they'll have more confidence to make those answers authoritative and permanent. The higher the confidence, the more likely we'll see fewer answers.
Interesting example, because zip code and area code sites used to be pretty big, and Google has started displacing them as well. Here's a zip code example with a prominent carousel:
https://www.google.com/search?q=zip+codes+in+chica...
Just FYI -- I wrote the Mega-SERP article ;)
I think it's worth noting that these sites suffered huge losses as soon as the answer boxes rolled out. I'm not sure how much is left for them to lose. What it should be is a wake-up call for the rest of us. If your data is easy to get and repackage, you're in danger.
New Research: 35% of Competitive Local Keywords Have Local Pack Ads
Blog Post: February 13, 2018
I honestly can't say whether people are being charged for clicks to the KP (beyond Google's statements), and I totally agree that, the way it's set up now, there's really no transparency at all and no way to verify this. Standard AdWords, love it or hate it, feels a lot more transparent than this, and Google needs to sort that out ASAP.
I'm not up on all of the details of how the bidding works, but my sense is that the CPC is tied to the broader keyword, so if you're paying high amounts normally, you'd still be paying high amounts. The actual charges only come with an engagement (you don't pay for a click to the Google Maps listing).
It's tricky -- if you were to jump in now, you'd have a first-mover advantage, but you've got two things to watch for:
(1) If you already do well in the local pack, you may be paying for no reason or cannibalizing that listing.
(2) If your competitors get wind of it, they're going to start competing with you and drive up prices.
So, do you take the short-term advantage and hope competitors don't notice or wait it out? I'm afraid that's much more a business decision than a technical SEO one.
It's a bit different model -- you don't pay for the click to the Google maps property -- you pay for a handful of different types of engagements after that (click-to-call, website clicks, etc.).
Unfortunately, I have no insight on the European market for local at this point. I'm going to guess it's either much lower prevalence or that the program is US only for now (that's just a guess). I know the local service ads are restricted to US cities right now. The EU regulatory issues definitely have slowed Google down.
Thanks, Dana! Hope they make that UI a bit friendlier. I do think it's interesting that you get charged for the actual post-Google engagements. I can imagine Google using that model for many other properties.
For better or worse, regular/organic pack listings also go to your Google listing, so if you want to compete for position, you'll probably be willing to pay. I think it's mixed with local -- for marketers, it's a huge challenge (we're used to controlling our sites). For local businesses, though, the Google listing genuinely drives phone calls, foot traffic, etc. It's a consistent format that's easy for search users to parse and can be adapted to mobile and voice. The truth is that we're probably going to have to learn to let go of the idea that organic search impressions = traffic, at least for some niches.
The cases with two organic listings seemed to be normal and not reduced by the ad placement. Right now, it looks like the ad is in addition to the regular/organic pack listings.
Unfortunately, since this is currently part of location extensions, I'm not clear how the metrics break out or if you can view them independently (if anyone on the PPC side knows, please speak up). What's interesting is that, since the first click is to Google, you're not billed for that click, so I'd be very curious how the ROI breaks down.
Looking at what's happened with hotels, restaurants, and other specialized local packs, I think we can definitely expect more experimentation and feature launches. If ad engagement is solid and doesn't hurt other metrics Google cares about, we're going to see more ads.
I certainly agree that that's a big challenge for many retailers -- Amazon's dominance just keeps growing. From an SEO standpoint, though, it does seem like Google is localizing more searches and is trying to sort out intent more. So, some queries naturally lead to online retailers and some seem to lead to local retailers, depending on how Google interprets intent. If I were a local retailer, I would re-focus my efforts on those queries with clear local intent and not waste time/money competing on the queries that folks like Amazon dominate.
I think our keyword research is often based on vanity, in a sense -- *I* want to rank on these terms because they're high volume and I think they'll drive traffic. However, we don't stop to look at intent or how Google interprets those queries. We need to be more selective.
I agree that these ads seem to be targeting much more broadly, location-wise, than regular pack results. I expect the ads to get more location specific as more people buy in, but there are definitely two separate algorithms going on. The flip side of this is that, if you're trying to target an area you're not that close to, there may be an opportunity here (within reason, of course).
Google's Walled Garden: Are We Being Pushed Out of Our Own Digital Backyards?
Blog Post: February 27, 2018
It's also easier in the 10-blue-link world for Google to just claim they're dispassionately surfacing information and people have multiple options. When they start choosing an answer or presenting their own content, it completely shifts the balance of responsibility for that answer.
That's the challenge right now -- it used to be that algo updates would impact everyone, to some degree. Now, Google's launches are so laser-focused that they impact only a small number of companies, but for those companies the impact is massive. It leaves everyone wondering when the other shoe is going to drop for their industry. I'm not trying to be conspiratorial, but I hear this fear from businesses every week.
That's a great point -- there is a lot happening around query refinement and Google trying to determine search intent. Recent example is the new links that appear when you bounce back from a search. Trying to figure out what people want from a few words is one of Google's biggest challenges, and they're going to use all of the data at their disposal. If they control the path of that data (and don't send you out to a 3rd-party site), then they can watch the entire process.
I definitely don't want to give the impression that this idea is somehow new and uniquely mine -- I think many smart people (yourself included) have seen it coming. Even as I follow the trends, it's hard sometimes not to just see each change as an isolated event. It's only been the last 2-3 years, when the pace has really accelerated, that the trend has become inescapably obvious to me. I think that's why I finally wrote the post -- because there are just so many examples now that I hope even people outside of search can see the bigger picture.
Not to be cynical, but I do think that the egalitarian days of the early web are over. Google's job is to model the world, and in the world big brands and big money are powerful. If a search for "Apple" returns nothing but enthusiast sites from apple growers, no matter how well written the content is, that's not going to match the expectation of consumers. I don't think we're ever going back to a time, from a search engine standpoint (any search engine), where the playing field is level.
Facebook has almost gotten away with more because we didn't expect as much from them. They were never really a fair playing field. I completely agree, though, that it's becoming more and more important for all of us to diversify traffic sources and build our own audiences. It's not easy, but Google and FB hold too much power over our traffic right now.
Thinking out loud, I almost wonder if there's an AdSense model that could be put into place. If Google uses our content (instead of just linking to it), does it merit some kind of micro-payment? The big challenge is that those programs are ultimately opt-in, and Google can't limit themselves to just participants. For answers to work, they need the entire index in play. This is also why they're not waiting for structured data and are using any text available.
The legal aspect has been interesting to follow -- Google does fall back on the idea that we can opt out at any time, which is true in theory. Many have argued, though, that Google's disproportionate share of search (I won't use the M-word) means opting out is a practical non-option now. We've been led down the garden path, and now there's nowhere left to go.
I think the ethical aspect is more challenging. Google got to where they are based on the broader content of the web, and that content fueled everything else. Do they owe the people who provided that content something, or was that paid in full by traffic received so far? If Google went to an all owned-content or paid model, I think (like it or not) they could wipe the slate clean. When they start to extract answers, though, and use them to build content, and that content replaces the original, it gets a lot more dubious.
I think the other worry I have regarding (2) is the end of diversification. When you get back 10 blue links, they may not all be good, but you get choices. You can go to those sites, judge their quality, and go somewhere else if you want to. Look at my Featured Snippet example, though. What if that information is false? I probably won't vet it much, because Google grants that answer authority, and as an end-user I generally trust Google. I have no idea, though, and I could end up (metaphorically?) poisoning my hamster. On desktop/mobile, I've at least got a SERP below it, but on voice this is the only answer I'll get.
I empathize with the challenge, and I get why Google has done this. Medical Knowledge Panels probably came out of issues with bad information in the medical space -- I get that, and I get it's a real challenge. On the other hand, not all information is clearly factual (a problem Google is painfully aware of in the news space). As Google chooses content and creates their own content, they narrow our point of view.
How Long Should Your Meta Description Be?
Blog Post: December 19, 2017
There are definitely desktop/mobile differences. Oddly, we're sometimes seeing longer limits on mobile, because Google's more likely to consider scrolling ok. We see a lot of two-line titles in mobile, for example. I tried to pin this down with our display title research, but it turns out to be a lot messier, because the mobile CSS varies with the device (unlike Google's desktop CSS, which is fixed-width).
Honestly, no change is permanent, in the sense that Google may change it again. This is the ever-increasing reality we face. However, I don't think this is just a test -- Google has been testing long descriptions for over two years now, at a smaller scale. I think they hit some level of confidence.
It looks like mobile descriptions have gotten longer, but I'm seeing mixed signals and don't have a good data set on that. Google seems more willing, interestingly, to make mobile results bigger, since scrolling is natural. Mobile results actually have longer titles sometimes in 2017 (due to the two-line wrap).
Personally, I'd ease into this, testing critical pages where longer descriptions would be beneficial, even if the change is permanent. I don't expect this change will roll back in the short term, though.
Unfortunately, with the multi-line format, it's really tough to tell. That was tricky even with our title tag study. Google is hinting that it's a character limit, but they've been wary on specifics, because I think they want to avoid people obsessing over it (which I understand, though I also get why people want a number to work with).
Nice :) Changing the world, one meta tag at a time...
Sorry, just realized you may mean -- why haven't we updated our recommendations on the site and in our tools? We are definitely in progress on that, and it should be soon. Takes a little longer with Site Crawl, since those are more than copy changes.
Interesting -- thanks.
It's slightly counter-intuitive, but Google seems ok with making mobile SERPs longer. Many titles wrap to two lines, for example (and some display titles are longer than desktop). I think it's because scrolling is so much more natural. We have no evidence at this point that Google has changed the number of organic results/page, but anything can happen. It's tougher to measure these days, since SERP features cut into the organic count.
I'm curious -- what was your testing timeline? It looks like the big changes kicked in around November 30.
All good questions, and I think time will tell. For some critical pages, I think it's worth testing. I wish we had better CTR data for organic results, but GSC at least gives us the basics. One concern is that, if everyone goes for long descriptions and writes terrible ones or keyword-stuffs them, it could devalue all descriptions and harm clicks. So, SEOs, let's not f--- this up ;)
Whoo-hoo! Thanks, Joost.
Google is confirming some changes, but isn't clear on the details. The data strongly suggests this is a full roll-out. Google has been experimenting with longer descriptions for over 2-1/2 years now.
We should absolutely not stuff keywords in them. There's no evidence at all that keywords in your Meta Description help ranking in 2017 (and, in fact, keyword-stuffed metas have been a spam signal on some engines in the past). Descriptions should be written for users and to help drive quality clicks (this is, indirectly, good for SEO, too). If providing more text/info contributes to that, then I think people should try longer descriptions. If not, then leave them alone.
We will be experimenting with them going forward. This post, for example, has a description just under 300 characters, and it's already showing up (uncut) in search results.
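For anyone wanting to run the same kind of audit on their own pages, here's a minimal standard-library Python sketch that pulls the meta description out of a page's HTML so you can check lengths at scale. The parsing is intentionally simplistic, and the sample markup in the test is hypothetical:

```python
# A minimal sketch for extracting a page's meta description so you can
# audit lengths across a site. Uses only the standard library; real-world
# HTML may need a more robust parser.
from html.parser import HTMLParser

class MetaDescriptionParser(HTMLParser):
    """Capture the content of <meta name="description" content="...">."""

    def __init__(self):
        super().__init__()
        self.description = None

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attr_map = dict(attrs)
            if attr_map.get("name", "").lower() == "description":
                self.description = attr_map.get("content", "")

def extract_description(html: str):
    """Return the meta description string, or None if the page has none."""
    parser = MetaDescriptionParser()
    parser.feed(html)
    return parser.description
```

Run this over a crawl of your key pages, and `len(extract_description(html))` tells you which descriptions sit near or over whatever display limit you're working against.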
It's as good an explanation as any I have :) We'll see if they admit to that, or just quietly change it.
Totally agree on filling the extra space -- this is not an invitation to keyword-stuff descriptions. If more text is useful, I think people should be open to trying longer descriptions. If it isn't, leave them alone.
I disagree regarding this being an experiment, though. Anything can change, of course, but Google has been testing these longer descriptions to some degree for over 2-1/2 years now. They have the data they need, I strongly suspect. This appears to be a full roll-out, and Google at least vaguely confirmed that.
Moz the Monster: Anatomy of an (Averted) Brand Crisis
Blog Post: December 13, 2017
There's a "Dr. Pete's" that makes small-batch marinades and sauces, and I used to give them away to clients. They had this coffee balsamic marinade that was amazing, but discontinued it :(
Could be both.
Interesting -- thanks for sharing your story, Dixon. Yeah, we risked some brand confusion when we shifted to just "Moz" a couple of years back. Ultimately, though, that's a business decision first, and often other factors outweigh the SEO implications. You have to be aware of the SEO implications, but these decisions have to go much deeper.
We get confusion every year from Morrissey fans when MozCon starts trending.
On a small scale, if they took a spot or two, it's not a big deal. People looking for them would find them and people looking for us would find us, and no one needs 10 spots. The danger/concern is if, due to the massive scope of the campaign, the signals shifted and Google started to interpret "Moz" to mean Moz The Monster and not Moz the company. It's unlikely, but given we're a smallish company and this was a huge ad spend, it's possible. Then, we'd be looking at a significant investment (and probably a substantial PPC campaign on top of it) to re-compete for our brand and just get back to where we already were before all of this.
Truthfully, being a UK brand, it's likely they never heard of us, or they saw mentions of "Moz" in various forms and thought they were all unlikely to cause confusion. Being a one-syllable brand, these things are also bound to happen. Star Wars VII, for example, has a character named Maz which is pronounced exactly like "Moz". Should someone, somewhere do their due diligence? Probably, but naming a character for a movie or ad campaign isn't quite the same as picking the one-and-only-forever name for your brand.
It's a fair question, and a tough one. First, I'd want to isolate the impact. If the SERPs are changing and John Lewis was ranking for a handful of organic results for "Moz" *but* we weren't seeing drops in CTR or traffic, I'd monitor the situation but probably not worry about it too much. It's possible for two brands to co-exist and the people who need to get to each brand to still get there. We're not necessarily going to cannibalize each other. Obviously, if they were a competitor in some way, that would be a different story.
If they were a competitor or there was clearly overlap, then we may be talking infringement and I would seek legal advice. If an SEO tool provider called their company or tool "Moz", we'd have legal recourse. If a PPC tool provider called their mascot "Moz" and made him/her a furry monster, things get a lot grayer, although we could certainly argue a deliberate intent to confuse customers and capitalize on our brand. Disclaimer: I am not a lawyer, etc.
If there were no infringement, but the new brand threatened to completely take over the SERP, then my first step would probably be a more robust paid search strategy. That might sound odd coming from an SEO guy, but it would be the easiest/fastest way (although possibly costly) to make sure we held onto a spot in that SERP for our prospects and customers.
After that, we'd have to do the hard work of establishing our brand presence and refocus our content strategy. If the competing brand was friendly, there might be room for partnership of some kind. You sometimes see a domain or brand that says "Are you looking for [X] instead?" because both sides realize there's confusion and it doesn't benefit anyone.
Knowledge Graph Eats Featured Snippets, Jumps +30%
Blog Post: November 27, 2017
It's certainly possible. This trend was captured almost entirely on English queries and Google.com/US SERPs, though, so it's hard to tell. The timeline doesn't quite line up with what we know about the ccTLD changes (which we've been tracking pretty closely, for product reasons).
Yeah, I fear that people will look at this and say "Ok, let's stop doing Featured Snippets!", missing that those snippets are just part of a broader organic ecosystem. If you don't bother to understand Google's intent and why they're making these changes, you'll just keep rushing toward the shiny new thing and capturing fickle, short-term gains.
Good point -- Knowledge Panels have deferred placement on both desktop and mobile, compared to Featured Snippets, and especially on mobile may be pushed to the bottom. It may be that Google just found the Featured Snippets weren't serving this use case of broad, ambiguous queries well. The Knowledge Panels aren't that much better, in many cases, but they're consistent and a bit less prominent.
We see that with movies/music in the US, too. If there's a YouTube video for a popular phrase, even a product name, it may pull up a music video. Case in point: Rihanna's "California King Bed". Saw today that a search on "vacation" returns a Knowledge Panel and (at the bottom) movies related to National Lampoon's "Vacation".
I suspect it's a problem of ambiguity -- and Google has struggled with it for a while. If you type "travel", what are you looking for exactly? Granted, you probably didn't need travel defined, but when Google was trying to match a Featured Snippet to that search based on content that matched the word "travel", that was also probably a lousy experience. They've tried a lot of things, including In-depth Articles, and my suspicion is that nothing works as well as they'd like. They just don't know what people who type in these broad queries want, and so they throw everything at them to see what sticks. It's not a great solution, but I'm not entirely sure there is a great solution.
I do still believe that targeted Featured Snippets have real organic search value, but I think picking and choosing phrases with strong intent is a good bet. Many of these broad, "head" phrases are ambiguous. How do you target content to someone who just types "travel" or "toilet"?
It's tough to tease apart, because many of the terms that picked up these Knowledge Panels tend to be broad, "head" terms, which naturally have different characteristics. Google has struggled with these more ambiguous terms (in terms of intent) for a long time. If someone searches for "laptop", what's their intent? Are they researching? Buying? Do they know what a laptop is? It's a tough nut to crack, and I think our obsession with these terms as SEOs, just because they're often high-volume, is often counter-productive.
I definitely don't think this is cause for panic or even cause to stop pursuing Featured Snippets. I suspect that, as you said, these very broad, ambiguous terms tend to have limited opportunity. We look at volume alone for head terms and rush after them, but in so many cases user intent is unclear and the cost to compete is very high. I suspect Google found that Featured Snippets were a poor fit for these situations and, perhaps, that engagement was low. I would, however, be very wary of connecting it to RankBrain, per se, because I think that's a specific component of the algorithm that is separate from the sub-algorithms that generate Featured Snippets.
The long meta descriptions seem connected to the same engine that produces Featured Snippets, but the pattern isn't exactly clear. We often see long meta descriptions on queries that also generate Featured Snippets, even for the result that isn't in the snippet. They both seem related to Google's ability to process on-page text and feed it into the general knowledge funnel (for lack of a better term).
I've definitely seen list snippets created from things like header tags or even bolded paragraph headers. Google does not need an HTML list (<ul>, <ol>) structure. They're parsing that content from many different kinds of "plain" HTML. They can't rely on webmasters to provide a specific kind of structured data or markup, IMO, or they'd cut out too much potential content.
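To illustrate the idea that Google can assemble a list snippet from headings rather than a true `<ul>`/`<ol>`, here's a simplified sketch. The regex approach and the sample page are illustrative only; Google's actual parsing is far more sophisticated:

```python
import re

def extract_list_candidates(html: str):
    """Pull step-like items from h2-h4 heading tags, mimicking (very roughly)
    how a list snippet might be assembled from 'plain' HTML with no list markup."""
    return re.findall(r"<h[2-4][^>]*>(.*?)</h[2-4]>", html, flags=re.S)

page = "<h3>Step 1: Research</h3><p>...</p><h3>Step 2: Outline</h3>"
print(extract_list_candidates(page))
```

The point is that structured headings give a parser list-like items even when the markup isn't a list, which matches what we see in the wild.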
How to Do a Keyword-Driven Content Audit (with Keyword Explorer)
Blog Post: November 07, 2017
I'm not sure there's a one-size-fits-all answer, but if I were going to do a serious rewrite and update of a piece of content, I would typically create a new piece of content, launch/promote it normally, and then 301-redirect the old content to it (assuming they had very heavy topical overlap).
If I were just going to do some tweaks or testing on a piece of content getting solid traffic (like testing a new title), I would leave that on the old URL.
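The 301 step above could be sketched as a simple path-mapping routine. The paths and helper names here are hypothetical, and in practice you'd configure this at the web server or CMS level rather than in application code:

```python
# Hypothetical mapping of retired URLs to their rewritten replacements
OLD_TO_NEW = {
    "/blog/old-guide": "/blog/new-guide",
}

def redirect_for(path: str, mapping: dict):
    """Return (status, location) for a request path.

    301 (permanent) tells crawlers the move is final, so link signals
    should consolidate on the new URL; unmapped paths fall through to 404.
    """
    if path in mapping:
        return 301, mapping[path]
    return 404, None

print(redirect_for("/blog/old-guide", OLD_TO_NEW))
```

The key design choice is using a 301 rather than a 302: a permanent redirect is the signal that the old content has genuinely been replaced.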
Nice! Yeah, once you get into pivot tables, you start finding uses for them everywhere. I really like the idea of better tying rankings to landing pages and viewing them from a content-centric perspective.
Could you share the domain with me privately ([email protected])?
Sorry, Luke -- let me check with the team.
NEW in Keyword Explorer: See Who Ranks & How Much with Keywords by Site
Blog Post: October 23, 2017
FINALLY!! *breathes sigh of relief* *dies*
Many thanks to the team for months of work on the back-end of this. The Keyword Explorer features are a first step as we explore how to use this incredibly valuable data in other products and features.
What I love for now, though, is the workflow aspect. People are always asking "How do I know everything I rank for?" -- this may not be quite everything, but it's a ton, and Keyword Explorer makes it easy to pull that data and move it directly into lists and campaigns for actionable insights.
Announcing 5 NEW Feature Upgrades to Moz Pro's Site Crawl, Including Pixel-Length Title Data
Blog Post: September 12, 2017
About a week ago :) All current customers should have access.
The width is definitely font-dependent, but this is based on Google desktop SERPs, which are consistent (Roboto, if I recall correctly). They switched fonts a year or so ago. We will do our best to update it as fonts/sizes change in the future. Mobile is a bit harder because there's no fixed container size -- the width of a mobile SERP can vary with the device and screen size. Mobile display titles can also wrap to two lines.
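Conceptually, a pixel-length check boils down to summing per-character widths against a container limit. The widths and the 600px limit below are illustrative placeholders, not Moz's actual Roboto metrics:

```python
# Rough per-character pixel widths -- illustrative guesses, not real font metrics
CHAR_WIDTHS = {"i": 5, "l": 5, "m": 15, "w": 14, " ": 5}
DEFAULT_WIDTH = 10          # fallback for characters not in the table
DESKTOP_TITLE_LIMIT_PX = 600  # commonly cited desktop title container width

def title_pixel_width(title: str) -> int:
    """Estimate the rendered width of a display title in pixels."""
    return sum(CHAR_WIDTHS.get(c.lower(), DEFAULT_WIDTH) for c in title)

def fits_desktop(title: str) -> bool:
    """True if the title should display without truncation on desktop."""
    return title_pixel_width(title) <= DESKTOP_TITLE_LIMIT_PX

print(fits_desktop("Short title"))
```

This is also why character counts alone mislead: "MMMM" takes far more room than "llll" even though both are four characters.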
Moz Local Report: Who's Winning Wealth Management?
Blog Post: June 21, 2017
Unfortunately, what matters from a search standpoint is Google's interpretation, whether or not we agree with it, and Google is interpreting intent more and more often. They may have their reasons or they may have screwed it up, but either way it's the reality we're left with.
I admit I'm not an expert on GMB guidelines, but it appears that you're right -- they may be pushing policy limits on that one. Looking at their location pages, it mirrors their organic naming conventions, so it may be unintentional, but it certainly could be giving them a boost. Hopefully, someone with a bit more local SEO expertise can chime in.
New Site Crawl: Rebuilt to Find More Issues on More Pages, Faster Than Ever!
Blog Post: June 07, 2017
Sorry, been on vacation, but thanks for the detailed feedback, Mark! We're reviewing all of it ASAP. Definitely realize we need to introduce some bulk ignore options quickly.
Site Crawl, Day 1: Where Do You Start?
Blog Post: June 08, 2017
That's a bit complex -- if they're persistent (it's possible to just have a temporary outage) then yes, they're a big problem. They're blocking crawlers from seeing those pages. There are a lot of reasons for 500-series errors, but that's generally happening on the server level, so it can get into the specifics of your platform/OS.
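The temporary-vs-persistent distinction can be sketched as a simple check over repeated status observations. The function name and the "every check failed" threshold are illustrative choices, not part of Site Crawl's actual logic:

```python
def is_persistent_server_error(status_codes) -> bool:
    """Flag a URL as persistently broken only if every recent check
    returned a 5xx status; a single 200 suggests a temporary outage."""
    return bool(status_codes) and all(500 <= s < 600 for s in status_codes)

# Rechecking a URL a few times before acting avoids false alarms
print(is_persistent_server_error([503, 500, 502]))  # all 5xx: persistent
print(is_persistent_server_error([500, 200, 500]))  # recovered once: temporary
```

In practice you'd space the rechecks out (hours apart) so a brief server restart doesn't get flagged as a crawl-blocking problem.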
Update: My children were, thankfully, unharmed by the NOINDEX tags.