On May 22nd, 2019, Google's mobile SERP officially welcomed favicons. Many are touting the brand-building power of having favicons appear within your organic result. But is a picture really worth a thousand words? Who is helped most by the insertion of favicons onto the mobile SERP? Are there any losers? Does having a favicon make up for your site's name having less prominence? Simply put, are the new favicons on the mobile SERP good or bad for your page's organic result?
The In Search SEO Podcast Community Question of the Week!
When sites take on black hat SEO tactics how do you stay competitive and adhere to Google’s guidelines at the same time?
Summary of Episode 31: The In Search SEO Podcast
Today we speak with Glasgow’s most prominent SEO expert, the straight-talkin’ Craig Campbell about the myths of black hat SEO:
- Can black hat SEO techniques be effective if done properly?
- Where are the lines between white hat and black hat SEO?
- Is black hat SEO’s bad reputation justified? Is it really “immoral?”
Plus, we look at a new level of customization on the Google SERP!
Are We Seeing the Start of Customizable SERPs? [02:26 – 14:10]
As spotted by Valentin Pletzer, Google was running a Local Pack test that let you choose which sub-area you wanted Local Pack results for.
That meant you had the ability to select a specific location from an array of areas that were all within a few square miles of each other!
While this may not sound earth-shattering there is a tremendous amount of significance to this because you can expect this sort of thing to be all over the SERP in the future… as Mordy recommended to Google!
A few months ago Mordy wrote a post showing that Google's super awesome, super specific, multi-intent targeting doesn't always work.
The idea, in a nutshell, is that when the various intents on the SERP are related, Google can’t go wrong with targeting as many intents as it feels is advantageous, but when the intents don’t relate… cry havoc!!!
When the intents that Google tends to target are unrelated there is a certain amount of ‘user disconnect’. For example, Mordy ran a query related to ‘Notebook’ and he had all sorts of results for the movie The Notebook, computer notebooks, and paper notebooks. The results were quite diverse and catered to multiple user profiles…. Except, when that happens, no one user has enough results on the SERP to really satisfy them and the SERP gives off the impression that Google is a bit all over the place.
Mordy, actually, had a recommendation in these cases…. That Google show large cards at the top of the SERP or as an overlay (or whatever other formats would work) that lets the user CHOOSE which intent they are interested in. In this case, a large visual card to let you choose The Notebook movie and see results only about that, and other cards to let you choose to buy a notebook, or buy a notebook computer, and so forth.
This way Google can target multiple intents and not offer diluted results. It’s kind of like the Disambiguation Box on steroids!
So this idea of Mordy’s is very similar to the Local Pack letting you choose your own area test.
And that’s big news!
While the formats for both are not exactly parallel the concept is the same: let the user choose their own search direction.
And this makes 100% sense. Why wouldn’t Google do this and go deeper into this “functionality”? What do you lose by letting the user choose their own path? Nothing!
In fact, Bing’s been doing this for a while with their version of Featured Snippets which offer multiple answers from multiple perspectives and where the user can choose what answer they like for themselves.
However, Mordy doesn’t think Google will go this way when it comes to their own Featured Snippets. They could if they wanted. The closest thing they have are the Multi-faceted Featured Snippets where you have one snippet atop a second snippet serving to answer a follow-up question (best way to explain it in the shortest amount of words).
Why wouldn’t Google do this with the Featured Snippet? Mordy thinks Google sees things like Featured Snippets as a way to build authority. We just spoke last week with Alli Berry about how sites can build authority. Google wants to build authority for itself too, very badly in fact. Having a top spot answer, like a Featured Snippet, is a way for them to come off as “Hey, we got this. We know it all. We have the answers for you.” Showing a diverse set of URLs and content within the Featured Snippet doesn’t work from an authority perspective as it’s more of a user empowerment model. And we’re not saying one is better than the other. We’re just saying why we think users choosing what gets onto the SERP will continue, but not vis-a-vis the Featured Snippet.
If Mordy is right, this would have huge implications on the world of SEO. We’re talking about hyper-personalized results where you can rank #1 for a keyword in one case and #100 in another for the same keyword!!! That changes a lot of things from how you approach your content to how to track your progress. It’s enough to make your head explode thinking about it.
The Truths, Myths, and Misconceptions of Black Hat SEO: A Conversation with Craig Campbell [14:10 – 59:32]
[This is a general summary of the interview and not a word for word transcript. You can listen to the podcast for the full interview.]
Mordy: Welcome to the In Search SEO Podcast interview. This is a first on the show as I am sitting, live and in the flesh, with the renowned Craig Campbell of… well, everywhere: Conferences, courses, webinars, and of course his SEO consulting services. Craig, do you sleep?
Craig: Sometimes. Not unless my baby will.
M: He’s four-months-old, correct?
M: So you don’t sleep. I know how you feel. So I have to ask, do you change diapers?
C: I’ve done one before and it was a #1. I haven’t done #2s.
M: I envy you. As a father of four, I envy you. You are my hero.
So we’re here to talk about black hat SEO but first a disclaimer. Here at Rank Ranger, we do not endorse any black hat SEO tactics. We are merely talking about it as it is part of the overall SEO conversation and we need to talk about it.
Just so we’re all on the same page, what is black hat SEO and how does it differ from white hat SEO?
C: I’m not a big fan of the term black hat SEO. People say I do black hat SEO because I do a lot of affiliate marketing and I push the boundaries more than I would a client’s website.
Obviously, you've got your PR and all the other stuff you do like building your website, outreach, and link building, but for me, someone who does affiliate marketing, people see my sneaky tricks as black hat. What I am not is some dodgy spammer who puts 20 websites on a shared hosting account and links everything together, which leaves massive footprints so Google can slap me. That's black hat SEO as far as I'm concerned. Very shoddy, automated, and crappy work.
I wouldn’t consider myself black hat SEO as I try to cover my footprints and I try to do things with a mixture of legitimacy but people automatically just assume that I’m doing spammy crap work. Black hat SEO is automated tools, getting bots, and all this weird stuff. It’s about doing things and seeing what gets penalties and what doesn’t. It’s about understanding how far you can push Google.
With my affiliate marketing website, I like to play a few games with Google. It’s just like life, it’s one big game. We all like to be Mr. Nice and do white hat but sometimes you would like a beer. It’s no different when you’re doing something that you know is naughty. Maybe you shouldn’t have that beer today, but that wouldn’t be as much fun.
M: So I assume that you think the terms black and white hat don’t do anyone any service.
C: And the reason is that there are so many white hat people out there who do black hat stuff but don't admit it, or they think that because of who they are it's not black hat.
It’s all a matter of opinion. What you call black hat may not be what I call black hat. Who cares what hat you’re wearing? If you’re making money and your sites are ranking well then call it whatever you like. Then you have these guys who are playing the PR card, putting on the white hat, being squeaky clean, and all that stuff, but it’s all a sales pitch. These guys aren’t actually doing white hat SEO.
M: And that’s really the point. It’s a conversation that you need to know. You need to know what both sides are. You need to be able to have the intellectual honesty to say, “Okay, this is what works for my site and this doesn’t.”
With that, there is a perception that if you “go black hat” it’s because you’re a disgusting cheater who wants to game the system.
M: Are there legitimate reasons why honest people should take on black hat SEO tactics?
C: 100%. When you’re operating in very competitive niches and your competition are doing these naughty things you have the choice to play the game or don’t play and sit behind those guys.
Everything in life is about competition and if your competition is doing something very well and is making lots of money, whether it’s spending a lot of money like buying links or whatever, you have to follow suit or you won’t be a challenge at the table. At the end of the day, ranking well for your queries is where the money is at.
There are a lot of businesses who won't do it because of all the scaremongering going on online, that you will be penalized for this and that. If you get the metrics right, if you do a little bit of this and that, then the balance will be right and Google won't come down on you.
When you pay a PR agency about 20 grand a month, the chances that all of that money is going to be squeaky clean are slim to none. What's more black hat is that agency taking 15 grand of that 20 grand budget, putting it in the back pockets of the directors, and spending just 5 grand across content, links, tools, and research. To me, that's more black hat than me potentially paying a guy to do some outreach or paying a guy for links.
M: There are some really crappy SEO tactics. How do you separate the bad tactics from the ones that although they go against the Google guidelines can be a reasonable way to act with your site?
C: You're right, there are some crappy ones out there, like doing blog comments, where it's quite easy for them to ban you or ignore them. You can waste a lot of time doing that.
You have to basically feed Google what it wants, and what it wants is content. Let's be honest, you can do your site audit, site speed, or click-through rate, and they could elevate your positions slightly, but to really get up there you need to buy links. For me, most people have to buy links, whether that's niche edits or the painstaking outreach where they believe someone is actually reaching out to websites and going to mommy bloggers. But the real truth is people are paying mommy bloggers 50 quid and getting a 500-word spun article placed with a link. I don't see much wrong with that. Getting the spun article is probably not good, though. What you want is to get it indexed and do it a little better.
At the end of the day, we need links, we need content, and we need good on-page SEO. Figuring that out isn't that hard. How you get those links comes down to how far you're prepared to go to bribe the guy or schmooze him. I've had guys take me for beers, I've had guys buying me steak. Anything to get a shout-out or promote the tool.
M: How were those beers by the way?
C: Those beers were very nice.
M: I want to talk about the effectiveness of black hat SEO. You hear in the white hat community that black hat doesn't really work. There is a 2017 Search Engine Land article that stated, "PBNs usually provide little to no long-term value to the websites they are linking to." However, you don't think so. Why?
C: I've made a lot of money from PBNs. Look, there are PBNs and then there are PBNs. There are guys with PBNs on shared servers at a real low cost, and I wouldn't call that a PBN. As far as I'm concerned, a PBN is a real website, whether I use it for link generation, AdSense, or Amazon affiliate. That's what my PBN consists of: real websites, real traffic, real metrics, with their own backlink profiles. It's not some shady expired domain that's been fired up via the Wayback Machine and slapped onto some crazy hosting account alongside 20 other websites.
So there’s a PBN and a real PBN and that’s the difference.
M: And that’s exactly why I wanted to do this interview because there are so many taboos about this and there really shouldn’t be to a certain extent.
Let me ask a strategic question. How do you keep the costs down if you’re doing this at mass scale?
C: Before we recorded, we were just talking about using a site like rev.com to transcribe this podcast and I can then turn it into a blog and that would only cost me $30. Now I have 5000 words of great quality content.
Scaling can be done in a million ways. Using sites like rev.com, or you can outsource work to the Philippines. There’s a lot of cheap labor out there. I won’t say to scale up your content by going to the Philippines because English isn’t their first language and it’s going to be crappy content. But certain parts of the process can be outsourced for $200-300 a month. Personally, I have two Armenian content writers and the cost of labor is a lot cheaper than the UK. I can have five staff in Armenia for the price of two or three staff in the UK. To work at scale you have to think of alternative opportunities.
M: I wouldn’t even call this a black hat topic as many major companies do outsourcing.
C: Yeah, I mean the price of living in Thailand is so cheap.
M: Yeah, and you’re helping them out for doing this.
C: Exactly. That’s how you got to see it.
M: This is a question I have to ask. Won’t Google penalize if you take on black hat SEO tactics?
C: No. If you avoid leaving massive footprints, whether by building PBNs carefully, buying domain names, hiding your WHOIS, mixing that stuff up with fake personas, or buying hosting accounts with prepaid credit cards, then whatever steps you take to avoid any kind of trace back to you will go a long way toward helping you evade Google.
I guess they may catch me one day but so far they haven’t. I may not be big enough to be on their radar because I think there are other guys who are doing this on a more serious level.
M: Well after this podcast who knows?
C: Yeah, right. For me, the biggest mistake people make comes down to cost. They think, "I'd rather pay for one shared hosting account than for ten individual hosting accounts because I'd be saving money." You have to speculate and spend the money in the right places. To do these types of tactics you have to pay for the right stuff, which is where people make mistakes.
M: Let’s jump to SEO morality. If there is such a thing as SEO morality, where does black hat SEO fit in that? And I’m not talking about buying 100 links for $100, that’s just stupid. I mean actual tactics. Do you think that’s fair? Should there be a moral stigma when breaking Google’s guidelines?
C: I think it’s absurd. When you’re talking about morality I could tell you some of the tricks guys I know play and these guys have no morals whatsoever. I think the dirtier it gets the more chances of success you get. The fewer morals you have the more chances you have of winning. That’s just my honest opinion. If you want to be moral you should do white hat SEO.
M: Okay, so some people say that doing black hat SEO is fine because pragmatically that’s just what you have to do. If I understand correctly, what you’re saying is that Google’s guidelines are not a moral compass at all. It’s your site and you can do whatever you want.
C: Right. I have my own affiliate website. I’m not going to bow down to what Google says on what I should or shouldn’t do. I’m sure we have all heard of examples of where Google said one thing and they contradicted themselves on something they said previously.
You have guys out there who are A/B testing all the time, who leak that stuff out all the time, that’s just an outrageous lie.
Google doesn’t want us to game their system and will try to stop people here and there to stop the masses from gaming the system.
I'm going to make this clear: tagging your images probably does have some impact, but Google will probably say it doesn't because they don't want everyone to tag their images if that was going to be the next big thing that would get people to rank well.
You have to think in Google’s shoes. Use your common sense. Would you use guys to try and throw people off the track? You need to take a step back and ask if that makes sense and if you should be testing as well. And speak to other people in the industry who will tell you what works and what doesn’t.
M: You’re right. People forget that Google’s guidelines aren’t for the whole web, it’s for their search engine, yet there are many other search engines out there. Google’s guidelines are for your site to show up in their results and your site is YOURS. How you choose to go about practices on your site is your business. Google’s guidelines and your site structure should be two different things. Remember that Google will want x,y, and z because it’s good for Google, but the question is whether that is good for your site.
C: Let me put this question to you. Say I make money as an Amazon affiliate doing some sketchy tactics. Does anyone think for a second that I only want to make money for one or two months by doing something dodgy to get myself penalized? Absolutely not. I’m in my thirties and I need money for my family and kids’ education so I’m not going to do anything that’s going to jeopardize my income. But what I will do is do more than the next guy who is killing that niche. That’s the game we have to play and that’s the thing I’m prepared to put on the line.
And this is for my personal website but a client may not want to play that game and that’s perfectly their choice. As far as I’m concerned I will push, push, and push until I win.
As an SEO who does black hat SEO my job is not to see how quickly I can get penalized. It’s how long can I make this money because I want to live a good and easy life.
M: Let me ask a controversial question then. Do you feel that Google is the one pushing for this? If Google were a little more dynamic in their understanding of what an SEO needs in order to survive, would it ease up on its rigidity and create an environment where black hat SEO wasn't needed, or at least wasn't as taboo as it is in the current construct?
In other words, is Google causing their own problems?
C: Yeah. They should widely release patents or guidelines and tell people how it works if they don't want people to game the system. At the end of the day, whether we play a game online on PlayStation or whatever, someone wants to game that thing to win, whether it is getting the most coins or being the number 1 ranked player in the game you're playing. I play FIFA, and you can waste your time getting coins in the game or you can buy them off some hooky guy on the internet. It's more fun to just buy the coins and have all the best players.
There’s always going to be a way to get around things. If I told you that you can’t have another beer tonight you will find a way to get another beer. Similarly, with Google, they tell us that you can’t do this and then you do it. I think Google is causing their own problems by saying you can’t do this. As a human being, we don’t like being told what to do so people will try to break the system.
M: Wow. I do want the audience to consider for a second that the way things are, through no fault of Google themselves, is due to the way Google constructs things and to the way human nature is. There shouldn't be a stigma or taboo around doing x, y, and z in order to survive. There shouldn't be any judgment about it, and there is no reason to be aggressively argumentative with someone who disagrees with you on this.
C: Yeah. People have gotten aggressively in my face at conferences. There's no need for it in the world. We can disagree with each other, and if you do disagree, let's have the debate online. At least back up what you're saying with stats and facts. That way I can learn from it and you can learn from it.
M: Right, as I said, Rank Ranger doesn’t agree with black hat SEO but it is a conversation that’s important to have.
C: Yeah. People have to learn, make their own decisions, and implement. Even if it’s a small bit of black hat strategy or nothing at all.
M: This brings me to my last question. Doesn’t everyone game the system? For example, on this very podcast we’re going to link to your website in our post. We don’t have to. I could just mention it, or I could leave it out entirely, but we are going to because it’s common courtesy. We’re setting up an exchange: I get a great guest, you get some exposure and a link or two. And that’s normal, everyone does it… but is it 100% legit?
C: Yes, Google’s guidelines will say that building any links is against their terms of service, but the number one ranking factor out there is link building. It’s a contradiction. You can’t tell me to drink water instead of beer because it’s bad for you and then start pouring beer down my throat. It’s insane.
As I say, do what you feel is right and take insights from what I say or whatever anyone else says, including Google, and do what you want with it. Try it out. And if you make some money you may come back to me and say, “You know what. I think I should have bent a few rules here and there.”
M: Where does “gray hat SEO” fit into all of this for you?
C: No one cares. Again, it’s up to personal opinion. If you want a “PC” answer, it’s somewhere between black hat and white hat where you’re doing a few hooky tricks. But that’s all bull as we’re always doing tricks and you can call it whatever you want. As long as I’m making money.
Optimize It or Disavow It
M: Assuming you need to do one or the other, would you stuff a page with keywords or write a really slick click-bait title? Which is the lesser of the two evils?
C: As far as I’m concerned click-through rate is a ranking factor so I would choose the click-bait title. Keyword stuffing ain’t happening. Click-bait titles work for the general public as most guys out there aren’t computer literate or web-savvy.
M: Thank you, Craig, for coming on to the podcast. That was beyond entertaining, interesting, and insightful.
C: Thank you. Been a pleasure.
SEO News [01:01:28 – 01:04:57]
Google Bug on AMP Pages: There was another Google bug. This time, when on an AMP page, the link to go to the site itself did not work.
Google Bug in Maps: Another bug seems to be resulting in users seeing a redirect notice when clicking on the site within the Local Panel.
Search Console Adds More Days of Data: Search Console’s overview report now shows 90 days worth of data by default, not 28.
New Google Search Bar Menu: After extensive testing, Google has officially changed the menu bar on the desktop SERP. The menu now contains heading names next to icons.
Fun SEO Send-Off Question [01:04:57 – 01:07:44]
If Google had their own breakfast cereal, what would it be called?
Sapir suggested Google Crunch as she loves Captain Crunch.
Mordy thought Alg-O’s would be a clever cereal name. Get it? Alg-O’s, like algorithms?
Tune in next Tuesday for a new episode of The In Search SEO Podcast.
On June 4th, 2019, Google released its second official broad core algorithm update of 2019 (appropriately dubbed the June 2019 Core Update). At the same time, and further complicating the ranking picture, Google made a significant change to the top of the SERP by increasing domain diversity. Combined, the two "changes" created quite the substantial "ranking event."
With that, let’s have a look to see how the June 2019 Core Update impacted rankings and how to look for signs of top-of-the-SERP domain diversity within your rankings!
The Impact of the June 2019 Core Update on Rankings (Niche Analysis)
There are a lot of ways to break down the impact of any Google update. You could look at all sorts of sites, compile a list of winners and losers, and so forth. For our purposes, I’m going to go into a per niche breakdown, look at the most impacted industries, and see if there is anything to glean from the data.
Of course, and as is prudent to mention, no one data analysis can fully capture the extent, breadth, or even patterns of any given algorithm. It’s important to remember that any data we see is really one small part of a very large picture that is impossible to truly define. That said, here’s the data I collected on five niche industries:
There are actually two groups of niches here, the Travel and Retail niche on the one side and the Health, Finance, and ******** niches on the other. The reason for that is quite clear, the data clearly shows that the latter three niches were more heavily impacted by the June 2019 Core Update.
Specifically, once you move past the first ranking position there is a vivid demarcation between the level of rank fluctuations seen within each niche. For example, neither the Travel nor Retail niche showed more than a 20% rank fluctuation increase during the update. That stands in sharp contradistinction to the Health, Finance, and ******** niches, which all saw rank fluctuation increases of 25% or more at the 2nd position on the SERP during the update (which is pretty substantial being that we're talking about the 2nd position, not the 10th).
As you work your way down the SERP (i.e., when looking at the 3rd ranking position and then the top three, five, and 10 result ranges) the disparity between the niches continues. While, for example, the Travel niche showed a 54% increase in rank fluctuations in the top 10 results overall (significantly more than the Retail niche’s 24%), it pales in comparison to the 84% instability increase within the Health niche.
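For readers who want to run this kind of comparison against their own rank-tracking exports, here is a minimal sketch of how a "rank fluctuation increase" percentage like those above can be derived from daily rank snapshots. This is plain illustrative Python with made-up numbers, not our actual dataset or methodology:

```python
def fluctuation(ranks):
    """Average absolute day-over-day position change for one keyword."""
    return sum(abs(b - a) for a, b in zip(ranks, ranks[1:])) / (len(ranks) - 1)

def fluctuation_increase(baseline, during_update):
    """Percent increase in fluctuation during an update vs. a baseline period."""
    base = fluctuation(baseline)
    return (fluctuation(during_update) - base) / base * 100

# Illustrative only: a keyword that normally moves ~1 position a day
# but moved ~2 positions a day during the update shows a 100% increase.
print(fluctuation_increase([4, 5, 4, 5], [4, 6, 4, 6]))  # 100.0
```

Averaging this across every tracked keyword in a niche gives a niche-level figure comparable to the percentages cited above.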
Comparing the March 2019 Core Update to the June 2019 Core Update
I want to give the data here a bit of context. To do that, let’s see how the June 2019 Core Update stacks up against the March 2019 Core Update. Looking only at the length of the roll-out and the fluctuation levels, the June update was one day shorter but did see slightly higher rank fluctuation levels (see below).
Rank Fluctuations June 2019 Core Update:
Rank Fluctuations March 2019 Core Update:
However, looking at the fluctuations within the niches shows the June 2019 Core Update to be a more significant ranking event. That said, it is worth noting that just because Google confirms an update, does not mean it is any more potent than an unconfirmed update. With that caveat, the increased reach of the June update was not across the board. Rather, the Travel niche seems to have been impacted to the same relative extent during both the March and June core updates.
However, when you look at the Health niche, the June Core Update was far more impactful than what we saw in March. This is most vivid at the top of the SERP, as the fluctuations recorded during the June 2019 Core Update at the 1st, 2nd, and 3rd ranking positions were quite significant, while the instability increases seen during the March update were in line with what we typically see even during more minor updates.
It could then be argued that the June 2019 Core Update had a far greater tilt towards YMYL (Your Money Your Life) sites than the March update did. This does align with some of the renewed chatter about the algorithm vis-a-vis Google’s quality rater guidelines. That said, I would not go so far as to say that Google was specifically targeting YMYL sites.
Increased Site Diversity Alongside the June 2019 Core Update
Of all the data shown above, the most intriguing, to me at least, is the Average Position Change metric. Google is always moving sites around. The question is to what extent? Does Google, when shifting the rankings around, move the average page up/down the SERP by one position or 100? Whatever that number is, the obvious expectation is that it increases during an update. And without fail, that is exactly what happens. For example, during the Medic Update, sites within the Health niche went from an average position change of 2.34 before the update to 4.03 during the update. Here, however, the average position change within the Health niche increased by less than a full position (.86 of a position to be exact).
This was a pattern across the board. During the Medic Update, the ******** niche saw the average number of positions sites were moving jump by 1.31 positions. Here, during the June 2019 Core Update, that number was just .34 positions.
Simply put, the number of positions that sites fluctuated during the update compared to the number of positions they generally fluctuate was not that different (all things considered).
That does not mean that there were not huge gains and huge losses with pages swinging to extreme lengths up/down the SERP. That clearly transpired. Rather, it means that there were also a voluminous number of sites that moved just slightly up/down the SERP.
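As a rough sketch of what the Average Position Change metric measures (a hypothetical helper for illustration, not Rank Ranger's actual implementation), it is essentially the mean absolute shift across all tracked keyword positions between two points in time:

```python
def average_position_change(before, after):
    """Mean absolute rank shift across a set of tracked keywords.
    `before` and `after` are parallel lists of positions for the
    same keywords on two different dates."""
    return sum(abs(a - b) for a, b in zip(before, after)) / len(before)

# Illustrative: one keyword holds, one slips a spot, one swings four spots.
print(round(average_position_change([1, 5, 10], [1, 6, 14]), 2))  # 1.67
```

A big swing on a handful of pages can coexist with a low average, which is exactly the pattern described above: extreme movers diluted by a large number of pages that barely moved.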
Now, understanding what might be behind these numbers is a bit difficult, as concurrent with the June 2019 Core Update was an adjustment to site diversity at the top of the SERP. On June 6th, Google announced that they would be restricting the same domain to no more than two appearances within the top search results (although Google did indicate that there will be instances where this may not hold true). Google's new domain diversity hit the SERP at the same time as the June 2019 Core Update, making it hard to see the full extent of either. So when we see the June update not showing a high position change average, it is possible that the data was significantly impacted by the domain diversity adjustment.
How to Find Occurrences of Domain Diversity within Your Keyword Dataset
Being that I try to be a good SEO citizen, I wanted to briefly offer some tips for finding possible instances of your pages being demoted due to Google’s new domain diversity design.
Let me say right out the gate, there is more than one way to skin a cat. I’m merely using a few ranking reports from inside Rank Ranger to highlight some signs that might point to a page having been impacted by Google’s renewed top-of-the-SERP diversity concerns.
Step 1: Know Where SERP Diversity Matters
I know this is simple, but it's easy to forget that the new call for domain diversity applies "at the top of the SERP." Now, does that mean positions 1-5, 1-7, or even 1-10? That has not been laid out for us exactly. What is certain is that anything off page one is not part of this equation. Don't forget this when sifting through your rank data. Whether you see large shifts or movement of just a position or two on what would be page three, it is not part of the new diversity paradigm. Again, this is obvious but easy to forget.
Large or small rank changes at positions that are beyond page one of the SERP cannot be explained by Google’s change to SERP diversity
Step 2: Look for Gaps in Your Landing Page Rankings
Let’s assume you’re only tracking the top 100 results, or even just the top 10 results. If you are, then looking for a gap in your landing page rankings for a specific keyword is something that could point to your page being removed from the top of the SERP for the sake of diversity.
A landing page analysis that shows gaps in a page’s ranking for a given keyword during the early part of June could point to Google’s change to domain diversity on the SERP
Obviously, there could be many reasons as to why you lost rank for the page. However, seeing a consistent gap in ranking when prior to that you ranked well consistently is a red flag.
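If your rank tracker can export a keyword's daily landing-page positions, spotting these gaps can be automated. Here is a minimal sketch; the list-of-(date, rank)-tuples format is an assumption for illustration, not any specific tool's export format:

```python
def find_ranking_gaps(daily_ranks, top_n=10):
    """Return dates where a page that had been in the top_n results
    drops out (rank is None or beyond top_n) the following day.
    daily_ranks: (date, rank) tuples in chronological order, with
    rank=None on days the page does not rank at all."""
    gaps = []
    prev = None
    for date, rank in daily_ranks:
        in_top = rank is not None and rank <= top_n
        was_in_top = prev is not None and prev <= top_n
        if was_in_top and not in_top:
            gaps.append(date)
        prev = rank
    return gaps

# Illustrative: the page ranked well, then vanished on June 3rd.
print(find_ranking_gaps([("06-01", 3), ("06-02", 4), ("06-03", None), ("06-04", None)]))
```

A drop flagged in early June for a page that previously ranked consistently is the red flag worth investigating further.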
Step 3: Check Your Landing Page Fluctuations
Once you have a keyword that shows your landing page no longer ranks or not nearly as well (see step 2), you can now dive into your overall landing page performance for the keyword. In other words, now that you have isolated a keyword that may have been impacted by the diversity “update,” have a look at how your landing pages have been trending for that keyword.
The Landing Page Monitor showing a page being removed from top rankings for a specific keyword
If you see that you ranked multiple pages at the top of the SERP until early June with a sudden ranking loss to all but one or two pages thereafter… there’s a decent chance domain diversity has come to haunt you.
Now, Google has said you can still have two pages rank at the top of the SERP for a given keyword. That, however, is a maximum. As a general rule, you can have at most two pages at the top of the SERP for a keyword, and this does not preclude Google from limiting a domain to just one appearance among the top results on the SERP.
A sudden and sharp drop off in a site’s second-ranking landing page coincides with Google’s change to SERP diversity
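A quick way to see how many top-of-page-one slots a domain currently holds for a keyword is simply to count its appearances among the top results. Here's a minimal sketch (the function name and URL list are hypothetical; it assumes Python 3.9+ for `str.removeprefix`):

```python
# Count how many of a domain's pages sit in the top results for one keyword,
# per the "at most two pages" diversity rule discussed above.
from urllib.parse import urlparse

def domain_slots(results, domain, top_n=10):
    """results: ordered list of result URLs for one keyword's SERP."""
    hosts = [urlparse(u).netloc.removeprefix("www.") for u in results[:top_n]]
    return hosts.count(domain)

serp = ["https://www.example.com/a", "https://other.com/x",
        "https://example.com/b", "https://third.com/y"]
print(domain_slots(serp, "example.com"))  # → 2
```

If that count dropped from three or four down to one or two in early June, you are looking at a plausible diversity casualty rather than a quality demotion.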
Can Google Better Profile Your Site and the Keywords Relevant to It?
Having looked at my fair share of rankings for numerous sites and after speaking with Rank Ranger founder, Shay Harel, one theory that presents itself as a result of the June update is that Google is significantly better at knowing which keywords relate to your core profile. Meaning, it could be the case that Google has a far better grasp of which keywords are actually highly relevant to your site and which veer slightly away from your site’s core profile. Where a keyword hits on your site’s profile you got a slight boost; where it doesn’t, you saw a ranking demotion.
I happened to see that renowned SEO expert Gianluca Fiorelli indicated that he saw “…for core query sets, rankings are improving.” Obviously, and as Gianluca noted, that is the goal of a core update. That said, there does seem to be a noticeable advancement in Google’s ability to improve a site’s rankings for those keywords that align to its core profile. It should be noted that these core updates seem to come with an increased ability to profile sites/pages.
Indeed, part of me wonders if the reason why the average number of positions sites changed was lower with this update is that Google has become far more nuanced in how it understands keywords in relation to a site’s/page’s “profile,” slightly adjusting rankings accordingly.
It’s hard to know 100% with these things, but there does seem to be a pattern of Google having a better grasp of your site’s “identity.” In fact, some in the industry have speculated that Google now views a site as an entity.
An increased ability to align ranking boosts and demotions to a profile certainly supports this theory!
The In Search SEO Podcast
The In Search SEO Podcast Community Question of the Week!
Help the SEO community! Share your top tips when competing with spammy listings in the Local Pack or when dealing with fake reviews!
Summary of Episode 32: The In Search SEO Podcast
This week we have none other than the grandmaster of local SEO, Professor Maps himself, Mike Blumenthal to share his local SEO wisdom with us:
- Do local marketers need to hop on the paid search bandwagon?
- What is the real story with Google My Business monetization?
- The role of reviews in local ranking
Plus, we take a look at how Google is further developing its Topic Layer.
Google Adds More Layers of Subcategorization to the SERP [02:24 – 17:53]
If you recall, a few episodes ago, Mordy pointed out that Google was using the mobile SERP renovation as the first step to ratcheting up the number of changes it would be making to the SERP.
If you remember, we talked about all of the Google bugs (which are still popping up each week) and how Mordy thought that these bugs indicated something big in the works. We also talked about how the redesign of the mobile SERP (you know, the new colorless ad label, the smaller and colorless URLs with favicons, etc.) was an external representation of this makeover. An external makeover to symbolize an inner makeover.
Mordy actually predicted in the latest version of the SERP News that we would see Google pick up the pace with the number of changes it makes to the SERP. If you have been following these things you’ll know that 2019 has seen far fewer changes than in previous years.
Mordy thinks there’s a pattern that can be seen here. A pattern of the changes being made and a pattern of when they are coming. If you remember, Google updated the look and gave a slight makeover to the desktop SERP quite recently. At the top of the page, you’ll see the headers/buttons on the menu now have both an icon and a name. For example, for images, it says “Images” and has an icon that represents images.
What happened after the aesthetic changes to the mobile and desktop SERPs was an opening of the floodgates. All of a sudden there were tons of changes to the SERP. Tests to all sorts of SERP features.
But that’s not the whole story because it wasn’t just the number of changes… but what has changed.
What’s changed is that there is an additional layer of depth that has been added on to many of the features you see on the SERP.
Let us show you.
If this were 2018 and you Googled store hours at Macy’s Herald Square you would get an Answer Box that showed the store hours for that store. Now, in 2019, not only do you get the hours, but you also get the business’ Google rating (mobile only).
Here’s another case. When you search for a general product you might see a refinement box. For example, if you do a search for bunk beds you might see dropdown tabs that will refine your search. In Mordy’s case, he saw a tab that allowed him to “refine by size.” Expanding the tab gave him a carousel of bed size options: twin, queen, king, etc. Another expandable tab Mordy saw for the query was “by style” that allowed him to get results on a specific type of bunk bed.
In other words, an extra layer of topical depth/broadness has been added to product queries.
Same thing with the store hours search. We have the business, its hours, and now we have another layer, its reviews.
Mordy thinks Google’s Topic Layer has leaped forward in its ability to add layer upon layer of topical breakdown goodness: sub-categorization applied to an entity, to a topic, to a product, etc.
But that’s not all. If you did a search for bed sheets, it used to be you would get a carousel of bed sheets. Now Google is testing another tab where not only can you look at individual bed sheets but you can get a carousel of comparison sites. Mordy thinks this has to do with Google wanting to avoid the critique that comes with just showing its own products. This does get Google out of that issue a little bit but it also reflects a better understanding and breakdown of the “topic” or “product” into additional layers.
Here’s one last one for you. You may know that Google Posts are shown in order of post date and generally only last for 7 days. Well, now Google is at times showing a carousel of old posts from the business that relate to the user’s query. Back to the Macy’s example: if you searched for Macy’s perfume you might see old posts related to perfume in the Knowledge Panel. Again, another subcategorization, another parsing of a topic.
There were a bunch of other tests that you could tie into this. For the sake of time, we’ll leave them out for now, but in a few weeks, we will release the next edition of the SERP News, so full coverage there!
Again, all of these changes point to a pattern where external overhauls or changes to the SERP signify larger changes often enough. That larger change, or part of it at least, is an extra layer of topical depth on the SERP. With a more prolific use of the Topic Layer comes a variety of implications from increased product competition to a better contextual understanding of a business and so forth.
On the Health of Local Search: A Conversation with Mike Blumenthal [17:53 – 53:51]
[This is a general summary of the interview and not a word for word transcript. You can listen to the podcast for the full interview.]
Mordy Oberstein: Today we have the great sage of local SEO, the co-founder of Local University, the co-founder of online review engine GatherUp, and the provider of local insights on the Understanding Google My Business & Local Search blog. He is, of course, Professor Maps himself, Mike Blumenthal!
So am I wrong or are you a big outdoorsman?
Mike Blumenthal: I used to be a big outdoorsman. I was a guide for The National Outdoor Leadership School for a number of years in East Africa, Alaska, and Wyoming in the 70s and 80s. Since then I slowed down on that front. I do ride my bike a lot.
MO: Let’s start today with Local SEO: spam, reviews, and all sorts of fun things.
A lifetime ago, I used to work for a property management company in NYC for about 8-9 years so I’m familiar with the big NYC plumbing companies, the most reputable locksmiths, etc. However, when I do a search for ‘plumber nyc’ I don’t see any of the people I would expect to see. I get a whole bunch of results for “24-hour plumber” this and “24-hour plumber” that. Is the algorithm busted? Oddly, the Local Service Ads are more aligned to what I would expect to see.
MB: Well, for one, your search is broken. Most searchers tend to search on mobile, search for businesses nearby, and typically don’t use geo-modifiers in their search. If you do use geo-modifiers, and you’re within the proximity, Google does a better job. Sure, there is a bit of spam, especially in plumbing, which Google hasn’t done a great job of eliminating over the past year or two. So there is some brokenness in how Google handles local listings.
It’s a little bit of both. Google is interested in giving relevant search results based on location. But in service area businesses (SABs), there is a lot of spam and cheating as the economic rewards are so high. I think one of the ways Google responded to that is their Local Service Ads that look like pack results and need a higher level of verification to get in. They have been showing more of these in the SERP.
I wrote an article just the other day about Google testing a horizontal scrolling Local Pack. Above it, at the top of the screen, were two Local Service Ads, below those were three AdWords ads, then an organic result, and then the Local Pack. So one way Google is dealing with this is by pushing the Local Pack down and providing other results above it in that vertical and in that market.
For plumbers, it’s a very localized search. There are industries where people would want to look more broadly like, for example, car dealerships in New York City. There aren’t many and Google will need to expand its radius to answer your query.
MO: So with the Local Pack test you saw. For Google, is that an admission of failure?
MB: I had a couple of theories about it. One is that it’s a response to the government action that Yelp has been calling for over the past 4-5 years. Certainly, Local Service Ads are an admission that SABs tend to be spammy. Google has been doing a lot of work over the past six months asking SABs to reverify, and when they do they need to submit real signage of the location to prove that they exist. It is a slow process as it’s a combination of physical verification plus AI and machine learning.
I’m not looking at culpability on Google’s part. This is what you deal with today in every vertical, and this is the decision a business has to make vis-a-vis these results, regardless of culpability and guilt. Google has succeeded in winning market share and succeeded in putting most of the other local sites out of business, and this is your choice as a business or a consumer. To some extent, the reality doesn’t serve the business. Yes, it may suck, but in the meantime, you need to make a living, so how are you going to do it?
MO: You spoke about Local Service Ads. If you’re a legitimate business, do you feel that, because of all the spam, you need to go the route of paying for a Local Service Ad?
MB: I think it depends on the market, the vertical, and the location. We’ve seen AdWords in mobile creep up from one or two percent of the clicks to six or seven percent and maybe higher in this marketplace. And yet organic still has its part in that world. So even if ads took 20% of the clicks it’s still important to work on your organic local listings as 80% of those clicks are going someplace.
So the answer is yes. I would take a mixed approach and I would evaluate each. I would put in tracking to make sure the expenditures in one place vs the other is paying off. I see a mixed approach to search in three things: AdWords, entity optimization/knowledge graph optimization, and organic optimization. All of these provide value to a business. It’s just the issue of the mix and the return on investment.
MO: I want to jump to the topic of Google My Business optimization. For those not familiar, Google put out a survey that hinted to the search engine monetizing Google My Business at some point down the road. One thing that came up in that survey is that Google may charge to verify a review. To see Google saying they can do this but will charge for it, does that make your blood boil a bit?
MB: So I saw that and I immediately discounted it as it sounded like something written by a summer intern. The fact it had so many questions and they could’ve used their own survey tool made it seem unlikely that it reflected a higher level of thinking.
Issue two is the reality that Google is a corporation that is beholden to its stockholders to increase income. The other reality is that in local, Google has a long practice of offering a freemium model, providing many things for free or cheap, which it’s not doing out of the goodness of its heart. Google wants the user data, which it sees as gold.
I don’t need a survey to figure out what Google plans to do. One reality is that if an expensive investment won’t generate $1 billion of income, then they won’t do it. They need big money to change their income projections. Secondly, they see value in data and will continue to do things that will increase their data. For example, reviews. Google is now using review content to answer consumers’ questions about a business. That’s an incredible use of artificial intelligence and machine learning to solve a user’s problem and keep them on Google without having to go someplace else.
In that context, over the last four years, we have seen Google unleash the power of their local search engine to garner reviews. When I do an analysis of a restaurant chain with over 200 locations, I’ll look across their last four years of reviews. On Yelp, they’re getting the same number of reviews per location as they did four years ago, maybe one review per location. On Google, meanwhile, they zoomed past Yelp and are generating 10, 20, or even 30 times the number of reviews per location on a monthly basis. Google sees huge value in that data.
There was a patent released in 2017 which indicates that Google is parsing these reviews to understand entity details: what businesses do as well as how well they do it. Google uses reviews from a data point of view, so it needs massive quantities of data to effectively deliver on that promise of answering consumers’ questions about a business.
MO: So why not monetize Google My Business?
MB: They are doing it. If you look at the history of Google local through 2015, they announced they would do it through ads. They have increasingly localized their ad tools so that you can now, for example, promote pins on the map; you have ads in the pack results, and you have Local Service Ads. These are all new formats from the last year or so.
My answer is they’re already doing it with great success and great monetary gain. Do they need penny ways to make money to piss people off? Probably not when they’ve got big ways to get their hands into your pocket.
MO: I was talking to Sergey Alakov a few months ago and we were talking about how local is the natural segue to monetize voice search. Does increasing the monetization of local segue into solving their voice search problems?
MB: I think Google was ahead of the curve in terms of implementation of the Knowledge Graph and of understanding business characteristics. In a broad sense, the knowledge graph is a better way to answer questions than a link graph. So the question is whether a monetized version of the Knowledge Graph, where they’ve done extra vetting, as with the Local Service Ads, is a better way to guarantee the relevance of those results. I think in some verticals, like plumbers, Google has a legal and moral obligation to vet these businesses at a higher rate than they have been doing in the past, and Local Service Ads are the best way to do it. It makes sense they would deliver those one-or-two answers that voice supports via more knowledge about that business, with that knowledge likely coming from a paid ad because it includes serious vetting.
MO: Yeah. I think it was a month or so ago, there was some speculation that Google was using Local Service Ads as a way of providing voice search answers for local queries. From my perspective, that’s great. It’s much better than the average local result.
MB: Right. Google is concerned with delivering a good experience and a good answer and they got to where they are historically by doing that. So if they believe they’ll be giving a better answer in some verticals I think they’ll make that choice.
MO: Let me jump back to the Q&A Feature which we spoke of a few minutes ago. I find this feature to be very undervalued and underutilized. Is there potential for immense amounts of spam? In other words, why can’t one business ask some really nasty questions against its competitor which will show them in a bad light?
MB: Let’s first take a step back and look at the feature broadly. Google introduced it from August to December 2017 as a way to increase their long-tail understanding of a business. If you look at the way it’s shown on mobile it presents the business almost like a website with a tabbed interface. The Google Q&A serves to supplement that information much like an FAQ does on the business website. Google implemented it as a way to create timely answers to very specific questions that consumers might have. And then when a question is asked they would send it to a Local Guide, which they have 50-60 million of, to get a quick answer.
Now, this opens up possibilities on multiple fronts. On a positive side, it gives the business the opportunity to preload its own questions and answers into the Q&A. Answering potential consumer questions will save the consumer the phone call and save them the trouble of looking for the answer.
Initially, when I did my research, I found that 75% of the questions were selling opportunities, 23% were inappropriate questions that violated terms of service, 11% were reputation related, and the rest were irrelevant questions.
Since then I have noticed that many of these bogus questions have been taken down by Google, though some still slip through. The business can report a question and it will be taken down, so it behooves the business to monitor its business profile because it’s the most viewed information about the business in the world, even more than its website.
So yes, it’s open to abuse, as is any crowdsourced platform. Since Google is machine-learning driven, it tends to look at 90% accuracy as being accurate enough. That’s annoying, as it means about 10% is inaccurate, but a business needs to understand that this is how Google views the world: if it’s largely relevant, that’s good enough. As a business owner, I want precision. I want it to be accurate. This pushes the workload back down to the business in order to ensure it is accurate.
So yes, it’s open to abuse, but I think any business paying half attention to this problem would be monitoring their Google profile regularly and doing something about it.
MO: I want to preface the next question by saying that when it comes to YMYL issues on the SERP, particularly medical issues, Google’s Medical Knowledge Panel has taken a lot of care and effort (with help from the Mayo Clinic) to make sure the information there is accurate.
But in dealing with the Q&A feature for a health-related business, Google is dealing with content created by users and the business itself. Does the Q&A feature work for a medical/financial business where there is room for “irresponsible” information to be put out?
MB: I think a bigger tragedy that I have seen is when a consumer in desperate need asks for help for serious depression or suicidal thoughts. The consumer thinks this is going into a real-time answering system when in reality it goes into a tumbler where maybe the answer comes, maybe it doesn’t. Maybe the answer is accurate, or maybe it isn’t. I think Google hasn’t done a good job of educating consumers about this feature. I’ve seen some examples of tragic consequences and they’re very sad.
This is the problem with doing things at scale with machine learning and user-generated content. We’re seeing these problems now that our society has allowed this to become the modus operandi.
I think Google failed in two areas: One, they haven’t educated consumers on the feature, and two, they haven’t done a good enough job of building reporting so that businesses can be immediately notified about new questions. Recently, in January, they upgraded their API to support Q&A reporting, which is good enough for small businesses, but not for large businesses with multiple locations.
This is typical Google. They bring out a feature, they throw it against the wall, they see if they can get 85-90% reasonable answers, they think they can train their AI to keep the bad down, and then they invest in it slowly over time. They build early and iterate often. In the real world of local, that approach to relevance can be very annoying.
MO: It is interesting as it would seem that Google could’ve locked off this feature from certain business types.
MB: They do, it’s less visible on schools and such. So they could but they don’t. Google views success differently than us.
MO: I want to end off our serious questions with one last thing about reviews. In one Google help document the search engine says, “Google review count and score are factored into local search ranking: more reviews and positive ratings will probably improve a business’s local ranking.” This seems to imply that ranking is impacted just by the number of reviews and the rating of these reviews. But what about the content of the reviews themselves? Do you think Google in some way “indexes” it? The idea that they can verify a review seems to also imply that they can analyze its language. Do you think that the very content of the review will or does come into play in some way, shape, or form?
MB: I think the content is probably perceived by Google as the most significant part of the review. They published a patent in fall 2017. I wrote up an article about the patent and my understanding of it on GatherUp in February. In the patent they note the following: user reviews may be gathered from one or more of a blog or social network postings, emails, articles written for websites or from printed publications such as magazines or newspapers, or postings made to a user review section of an online vendor or marketplace.
So point one is that they look everywhere for review content. Point two is that they noted in this patent that this is about understanding entity attributes. In other words, they are looking at reviews to understand what a business does and how well it does it. They treat positive mentions of what a business did, in aggregate, as increasing the likelihood of that business showing for that attribute.
Google thinks of rank in terms of prominence and relevance. Reviews have some say in prominence, but not as much as people think. And relevance, i.e., the content of the review, plays a huge role in expanding the reach of a local business. In some ways, I think that’s the most important part of reviews: using them to understand more about the business.
Here’s an interesting tie-in between Google reviews, Google Q&A, and Local Guides. Firstly, Google incentivized Local Guides to create longer reviews; Google gets that more content helps with understanding. Secondly, there is a new feature in Google Q&A in the US: when a consumer starts typing a question, Google looks at the words in the query and starts surfacing reviews that match it. Google extracts those nouns and verbs and highlights in the review the ones it thinks answer the question.
This demonstrates what the patent is talking about when Google looks at the content of the reviews to understand the attributes of the business. And now they’ve taken the next step of not only understanding the attributes of the business but to answer unpredicted questions about the attributes of the business.
Optimize It or Disavow It
MO: If you had the terrible choice between uploading a pretty bland, boring, and otherwise irrelevant image to your local listing or creating pretty bland, boring, and otherwise irrelevant Google Posts and Q&A questions and responses, which would be the lesser of the two evils?
MB: The way I see it, the business should get their bicycles and go on a bike ride, because bad content and bad images are a waste of everyone’s time, particularly the consumer’s. And I can see marketers thinking that somehow more images are better even if they’re terrible.
MO: Wow. You’re the first person on our podcast to give such an answer. Which is why you’re so cool.
Thank you so much, Mike, for coming on. I really do appreciate it. It was a lot of fun.
MB: Thank you for having me and I’ll be glad to come back anytime.
SEO News [57:08 – 01:03:31]
Google Stealing Lyrics From Genius?: A bit of controversy as Google was accused of scraping lyrics from the site genius.com. Google denies the accusation, but it has taken the SEO world by storm.
This is interesting as Google has said that they pay for the lyrics, but what about other content they display on the SERP? Mordy wondered: if you wrote a recipe and Google scrapes it and puts it on the SERP, shouldn’t Google pay you? If you’re going to pay for one, pay for both!
WSJ Reports on Fake Google Map Listings: It was a rough week for Google. Days after the “lyric” incident, the Wall Street Journal ran a report highlighting how fake Google Map listings really hurt people.
This whole issue of fake Google map listings is old news so why is it being discussed now? According to Mordy, the culture is changing where people are asking if Google should be regulated. Whether you agree or disagree, the reason these articles are coming out is that it fits into the new cultural dynamic of Google being too big for our own good.
3D Ads and YouTube Live Ads: Google is bringing 3D functionality to Google Ads. A new ad format lets you place a rotating object in video format within an ad.
Also, Google is releasing a special Google Ad format for YouTube live broadcasts.
New ‘People Also Considered’ Ads: A new Google Ads test has a carousel of smaller ads entitled ‘People Also Considered’ appearing under one large ad.
The title ‘People Also Considered’, has been driving Mordy a little crazy. It seems indicative that the first advertisement isn’t your best option because people ‘considered’ other ads. It gives the user a feeling that before they make a buyer’s decision they need to click on all the other possible options.
Fun SEO Send-Off Question [01:03:31 – 01:07:54]
If Google had a theme song, what would it be?
Sapir thought that the Requiem for a Dream theme song would be a great fit as Google scares her sometimes and we’re all addicted to it.
As for Mordy, since Google has been under fire with this whole lyrics snafu their theme song should be Hit Me With Your Best Shot by the great Pat Benatar. It’s sort of a message, like c’mon SEO industry… you think you got something on us… hit me with your best shot.
Tune in next Tuesday for a new episode of The In Search SEO Podcast.
It’s always nice when theory meets reality. Not too long before I undertook the research for this study I wrote an article about why I thought Featured Snippets would be getting shorter. To be honest, I was not expecting my search marketing prophecy to come true so soon. However, when I noticed that Google was throwing content barely a few hours old into Featured Snippets it seemed the content within them was a bit on the thin side. Thus, I took to analyzing 150 Featured Snippets to see if my anecdotal observation was correct.
Long story short, Featured Snippets are shorter… sort of.
On the Average Length of Featured Snippets: Then & Now
As I mentioned, I started to get this sneaking suspicion that Google was shortening the length of the content found in your average Featured Snippet. So I did what any logical person would do: I took 150 Featured Snippets that appeared on the SERP between 2016 and the end of September 2018 (aka the end of Q3 2018) and compared them to the Featured Snippets that currently appear on the SERP.
In other words, I looked at actual Featured Snippets that were shown on the SERP between the start of 2016 and September 30th, 2018 and counted up the number of characters within each snippet. I then brought up the current Featured Snippets for the same 150 keywords and counted up the number of characters. Naturally, I then proceeded to compare the number of characters within the Featured Snippets of yesteryear to the current incarnations seen on the Google SERP.
It should be noted that I only compared Featured Snippets of the same format. That is, if a Featured Snippet was presented in paragraph form in, say, 2017 but currently appears as a list Featured Snippet, I did not use it for this study. The reason for this is obvious: Google tends to use different character allotments for each type of SERP feature.
Lastly, I analyzed paragraph and list Featured Snippet formats only. I did not look at any tabular Featured Snippets as it was generally too cumbersome to count the characters within them.
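If you want to replicate the comparison on your own keyword set, the methodology above boils down to a simple pairing exercise. Here's a minimal, hypothetical sketch; the data structures and numbers are illustrative assumptions, not how the study itself was tabulated:

```python
# Compare old vs. current Featured Snippet lengths, keyword by keyword,
# keeping a pair only when the snippet format did not change.
def average_length_change(old, new):
    """old/new: dicts mapping keyword -> (format, snippet_text)."""
    deltas = []
    for kw, (old_fmt, old_text) in old.items():
        if kw not in new:
            continue
        new_fmt, new_text = new[kw]
        if new_fmt != old_fmt:  # exclude format changes, as in the study
            continue
        deltas.append(len(new_text) - len(old_text))
    return sum(deltas) / len(deltas) if deltas else 0.0

# Illustrative data only (lengths borrowed from examples later in this piece).
old = {"how to bake a cake": ("list", "a" * 300),
       "top online shopping sites": ("list", "b" * 289)}
new = {"how to bake a cake": ("list", "a" * 262),
       "top online shopping sites": ("list", "b" * 315)}
print(average_length_change(old, new))  # → -6.0
```

The key design choice is the format filter: mixing a paragraph snippet with its later list incarnation would compare two different character allotments and muddy the average.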
That said, would you like to see what I found?
How Long Are Featured Snippets These Days?
I mentioned that Featured Snippets contain fewer characters than they used to, but that is not across the board. In fact, the average Featured Snippet in 2019 is actually longer than it was prior to Q3 of 2018. (We’ll get to how I can state that Featured Snippets are shorter these days in just a bit.) Looking at all 150 Featured Snippets I analyzed, the average Featured Snippet is seven characters longer than it was just a few short months ago.
On average, and regardless of format, Featured Snippets now contain seven more characters than previously
However, to get an accurate understanding of what is going on with regard to the length of Featured Snippets, we need to break down the SERP feature by its various formats. For our purposes that means analyzing the length of both list and paragraph Featured Snippet formats separately.
Check out our guide to winning SERP features.
The Length of Featured Snippets in List Format
Featured Snippets that contain a list (either as bullets or a numbered list) are about 9% longer in 2019 than they were prior to the end of Q3 in 2018. Specifically, the average list Featured Snippet now contains 315 characters whereas the format used to consist of 289 characters.
Accentuating the overall trend, list Featured Snippets are now a full 26 characters longer
That is, I pulled 75 “list” Featured Snippets out of the overall dataset (exactly half of the total number of Featured Snippets analyzed) and found that the format is 26 characters longer than it used to be. That aligns with the overall dataset, which has snippets being seven characters longer than they once were.
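For the record, the 9% figure follows directly from the character counts:

```python
# The ~9% growth in list Featured Snippet length, worked out explicitly.
old_chars, new_chars = 289, 315
delta = new_chars - old_chars            # 26 extra characters
pct = round(100 * delta / old_chars)     # 26 / 289 ≈ 9% longer
print(delta, pct)  # → 26 9
```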
Now, Google adding more content to list Featured Snippets does not equate to them being less efficient with their wording. In fact, one of the common themes when comparing older list Featured Snippets with their more modern counterparts was the removal of content bloating. Google has gotten better at stripping these Featured Snippets of unneeded and unrelated content despite there being more characters.
Here’s a Featured Snippet for the keyword top online shopping sites from January 2018:
There is no reason I need to know that the author’s friends order clothing from Urban Original. I really don’t care. Hence the 2019 version of this Featured Snippet has no such content:
Aside from a short description of Amazon (which is not entirely irrelevant), all we get is a nice list of online stores. Notice, the stores listed in 2019 are far less obscure than what was shown previously. Also, the title/header leading off the list in 2019 is far more succinct than what was shown in early 2018. Which is interesting because the 2019 Featured Snippet is 26 characters longer than the older version.
By the way, it was my experience that the same improvements apply when the current Featured Snippet is shorter than what it once was. I found the 2019 version of the Featured Snippet showing for the keyword how to bake a cake to be 38 characters shorter than what it was in April 2018 despite Google using the same URL in both!
Despite the same URL showing both inside the April 2018 and 2019 version of the Featured Snippet, the more recent incarnation is 38 characters shorter
What happened to make the current snippet shorter? The header leading the list. If you’ll look above, Google did away with the inefficient and a bit bizarre heading of Method 1 EditMaking Vanilla Cake for the far more comprehensible Steps!
[Note, Google also did away with the ‘More items’ button, thereby indicating that the list shown is the complete list and that there is less of a need for a click.]
It’s pretty clear that while Google has increased the character count within list Featured Snippets, it has done so while presenting more efficient content within them.
Why Are List-Type Featured Snippets Longer?
There’s a general trend towards Google offering a more complete information experience on the SERP. This takes the form of both more types of “Google” content as well as more complete answers within Google’s SERP features. For list versions of Featured Snippets, this means more content. (In a moment I’ll show you why this does not apply to all Featured Snippet formats). If Google wants a list Featured Snippet to serve as a comprehensive answer it may mean adding more content. (Interestingly, the average number of bullets/numbers within a given list Featured Snippet has not changed and stands at 6.6 both now and prior to the end of 2018’s third quarter.)
When dealing with a list, the more complete that list is (qualitatively), the more targeted the headers are, and the more qualified the list items are, the better the list can serve as a Direct Answer. That is not the case with paragraph Featured Snippets.
Paragraph Featured Snippets Are Now Shorter
Now for the fun part. Featured Snippets of paragraph form are roughly 5% shorter than they used to be. The average paragraph Featured Snippet now contains 269 characters whereas that number used to stand at 283 characters.
Google has been shortening the length of paragraph format Featured Snippets
14 characters might not sound like a lot. From what I can gather, that comes out to about three fewer words within a paragraph Featured Snippet. That’s not an exorbitant reduction, though it is still something to note and consider. Keep in mind, however, that this is the average. The character reduction is important (and is significant despite boiling down to three or so fewer words), but the trend itself is of equal importance. That is, we have to ask what fewer Featured Snippet characters mean qualitatively.
To help us understand just that here’s what a Featured Snippet looks like with 101 fewer characters:
First off, the original Featured Snippet from 2017 was not exactly War and Peace; it was pretty concise to begin with and stands well under the current, reduced average of 269 characters. This already slim incarnation now looks almost bare-bones and doesn’t even run more than three paltry lines. As you all well know, the shorter the content, the more likely it will be read.
Now, are you ready for why the character reduction here really matters?!
I’d like to ask you to go back and read the Featured Snippet from 2017 if you haven’t already because it hardly answers the question that is the query. It’s only in passing that we learn that a dual sports bike is street legal and even so we can only infer that this specific model is street legal.
Compare that to the 2019 version where we are told explicitly that dual sports motorcycles are generally street legal. The character reduction was carried out not because Google thinks shorter content is more readable content. Rather, Google is reducing the number of characters in paragraph Featured Snippets because it is better at dispensing information that directly aligns to the query. Shorter here means a more targeted and more complete answer.
What Shorter Paragraph Featured Snippets Mean for Search Marketers
Google’s goal, as I see it, is to use site content much the way it would implement Google-curated content: as a Direct Answer. Take the keyword universe origins, whose Featured Snippet is now 121 characters longer than it used to be. But length per se is not a problem for the Direct Answer equation. If Google’s goal is to better use Featured Snippets as Direct Answers, that may mean offering more content.
That becomes clear when we actually compare the current Featured Snippet (at the time of this writing) to the previous version for the keyword universe origins:
The 2019 version of the Featured Snippet contains ancillary information to better anticipate the user’s next question: the universe’s size. In this case, and ‘most curiously’, Google not only pulled content from the main body of the page but from a side box as well, so as to offer a more layered answer with content that one would expect to see in a Direct Answer.
While more content within a paragraph Featured Snippet can sometimes help it serve as a Direct Answer, that is not the prevailing paradigm. In general, the more focused and refined the content within a paragraph Featured Snippet is, the better it can serve as a powerfully concise response to the query. Which is why, despite individual cases of paragraph Featured Snippets growing longer, the current trend runs the other way. Here, as opposed to the list format, concision more or less inherently equates to answer directness and therefore to answer potency.
All of this, of course, and as I laid out in my Search Engine Land article on the future of Featured Snippets, means that users may be less inclined to click on a paragraph Featured Snippet’s URL. (I would say the same for the list format as well, but because it has more content, not less. That is, the definition of a potent and complete answer is not tied to concision, at least where list-type Featured Snippets are concerned.)
Father Time and the Fading of Featured Snippets
When I say Featured Snippets are fading, I don’t mean fading off the face of the SERP. Rather, as time goes on, one would have to assume that Google will get better at Featured Snippet refinement. That means the continuing process of turning the SERP feature into a lean, mean Direct Answer machine (for the paragraph form). Featured Snippets are already hard to win. For the keywords that really matter, it’s hard to unseat a URL sitting inside a Featured Snippet. With Google shortening paragraph Featured Snippets and adding more content to the list version as well, traffic may be a bit harder to acquire via the SERP feature.
That said, and as I’ve mentioned elsewhere, Featured Snippets will always be a win. More than that, having your URL placed in front of a user’s eyes in such an authoritative manner (as comes with being at the tippy top of the SERP) is a big brand win! Perhaps with paragraph Featured Snippets coming in at a paltry 269 characters, people will start to talk up the branding advantages of the zero-position box. Perhaps.
Many people are quiet when it comes to SEO for Bing because there’s not a lot of information about it. The funny thing is that many cutting-edge technologies and techniques were used at Bing before Google. Fabrice Canel, Principal Program Manager at Bing, recently shared a load of information with Jason Barnard of Kalicube about not just how Bing works but how search engines work in general.
Criteria for Indexing Content
Fabrice is in charge of the Bingbot crawler, URL discovery and selection, document processing, and Bing Webmaster Tools. He’s a good person to turn to for information about search engines, particularly crawling and page selection.
Here Fabrice describes the crawling process. The important takeaway, I feel, is how he says Bing is picky about what it chooses to index.
A lot of people feel that every page of their site deserves a chance to get ranked. But neither Google nor Bing indexes everything.
They tend to leave behind certain kinds of pages.
The first characteristic of a page Bing would want to index is that the page is useful.
Fabrice Canel explained:
“We are business-driven obviously to satisfy the end customer but we have to pick and choose.
We cannot crawl everything on the internet; there is an infinite number of URLs out there.
You have pages with calendars. You can go to next day forever.
So it’s really about detecting what is the most useful to satisfy a Microsoft Bing customer.”
Bing and Key Domains
Fabrice next talks about the concept of key domains and how Bing is guided by key pages on the internet that point it toward quality content.
This sounds like an algorithm that incorporates a seed set of trusted sites, where the farther in link distance a site is from those key websites, the likelier it is to be spam or useless (link distance ranking algorithms).
I don’t want to put words into Fabrice’s mouth; the above is just my observation.
I’ll let Fabrice speak for himself.
“Would you say most content on the web is not useful or is that exaggerating?”
“I think it’s a little bit exaggerated.
We are guided by key pages that are important on the internet and we follow links to understand what’s next.
And if we really focus on these key domains (key pages), then this is guiding us to quality content.
So the view that we have of the internet is not to go deep forever and crawl useless content.
It’s obviously to keep the index fresh and comprehensive, containing all of the most relevant content on the web.”
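The seed-set reading mentioned above can be sketched as a breadth-first link-distance computation. Everything here, the graph and the domain names, is invented for illustration and is not Bing’s actual algorithm:

```python
from collections import deque

# Toy link graph: domain -> domains it links out to. All names are made up.
graph = {
    "seed.example": ["news.example", "blog.example"],
    "news.example": ["blog.example", "spamfarm.example"],
    "spamfarm.example": ["spamfarm2.example"],
}

def link_distances(graph, seeds):
    """Breadth-first distance of every reachable domain from the trusted seed set."""
    dist = {s: 0 for s in seeds}
    queue = deque(seeds)
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, []):
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return dist

dist = link_distances(graph, ["seed.example"])
# In this model, domains far from the seed set ("spamfarm2.example" is at
# distance 3) are likelier to be low quality than those one hop away.
```

The intuition matches what Fabrice describes: crawl outward from key pages, and the deeper you go past them, the less likely the content is worth indexing.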
What Makes Bing Crawl Deep into Websites
Jason next asks about websites that get crawled deeply. Obviously, getting a search engine to index all of the pages of a site is important.
Jason framed it this way:
“Right. And then I think that’s the key. You prefer going wide and going deep.
So if I have a site that’s at the top of the pile, you will tend to focus more on me than on trying to find new things that you don’t already know about?”
Fabrice provided a nuanced answer, reflecting the complicated nature of what gets chosen for crawling and indexing:
“It depends. If you have a site that is specialized and covers an interesting topic that customer cares about then we may obviously go deep.”
Machines Choose What to Crawl
We sometimes anthropomorphize search engines by saying things like “The search engine doesn’t like my site.”
But in reality there’s nothing in an algorithm that likes or trusts.
Machines don’t like.
Machines don’t trust.
Search engines are machines that are essentially programmed with goals.
Fabrice explains about how Bing chooses to crawl deep or not crawl deep:
“This is not me selecting where we go deep and not deep. Nor is it my team.
This is the machine.
Machine learning that is selecting to go deep or deeper based on what we feel is important for a Bing customer.”
That part about what is important for the customer is something to take note of. The search engine, in this case Bing, is tuned to identify pages that are important to customers.
When writing an article or even creating an ecommerce page, it might be useful to look at the page and ask, “How can I make this page important for those who visit it?”
Jason followed up with a question to tease out more information about what is involved in selecting what’s important to site visitors.
“You’re just giving the machine the goals you want it to achieve?”
“The main input we give the Machine Learning algorithms is satisfying Bing customers.
And so we look at various dimensions to satisfy Bing customers.
Again, if you query for Facebook. You want the Facebook link at the top position. You don’t want some random blogs speaking about Facebook.”
Search Crawling is Broken and In Need of an Update
Jason asks Fabrice why IndexNow is helpful.
Fabrice responds by describing what crawling is today and how this method of finding content to index, now nearly thirty years old, is in need of an update.
The old and current way of crawling is to visit the website and “pull” the data from the websites, even if the web pages are the same and haven’t changed.
Search engines have to keep visiting the entire indexed web to check if any new pages, sentences or links have been added.
Fabrice asserts that the way search engines crawl websites needs to change because there’s a better way to go about it.
He explained the fundamental problem:
“So the model of crawling is really to learn, to try to figure out when things are changing.
When will Jason post again? We may be able to model it. We may be able to try to figure it out. But we really don’t know.
So what we are doing is we are pulling and pulling and crawling and crawling to see if something has changed.
This is a model of crawling today. We may learn from links, but at the end of the day, we go to the home page and figure it out. So this model needs to change.”
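The wasteful pull model Fabrice describes boils down to refetching a page just to learn that nothing changed. A minimal sketch of that change check, with placeholder HTML strings standing in for real fetches:

```python
import hashlib

def fingerprint(html: str) -> str:
    """Hash the page body so a recrawl can cheaply tell whether anything changed."""
    return hashlib.sha256(html.encode("utf-8")).hexdigest()

# The crawler stores the fingerprint from the last visit...
last_seen = fingerprint("<html><body>Hello</body></html>")

# ...then refetches the page later and compares. Here the content is identical.
recrawled = fingerprint("<html><body>Hello</body></html>")

needs_reindex = recrawled != last_seen  # False: the whole fetch was wasted work
```

The comparison itself is cheap; the problem Fabrice is pointing at is that the fetch isn’t, and under a pull model the crawler has to keep paying for it across the entire indexed web.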
Fabrice next explained the solution:
“We need to get input from the website owner Jason and Jason can tell us via a simple API that the website content has changed, helping us to discover this change – to be informed of a change, to send the crawler and to get latest content.
That’s an overall industry shift from crawling and crawling and crawling and crawling to discover if something has changed…”
The Present State of Search
Google tends to call the people who use its search engine users. Bing frames people who search as customers, and with that come all of the little aphorisms implicit in a customer-first approach: the customer is always right; give the customer what they want.
Steve Jobs said this about customers in relation to innovation, and it relates a bit to Bing’s IndexNow but also to publishers:
“You can’t just ask customers what they want and then try to give that to them. By the time you get it built, they’ll want something new.”
The Future of Search is Push?
Bing has rolled out a new push technology called IndexNow. It’s a way for publishers to notify the search engines to come crawl new or updated web pages. This saves hosting and data center resources in the form of electrical energy and bandwidth. It also makes it easier for publishers to know that the search engine will come and get the content sooner with a push method rather than later as in the current crawl method.
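The IndexNow protocol lets a publisher ping a participating engine with a simple GET request carrying the changed URL and a site-verification key. This sketch only builds the ping URL; the endpoint and parameter names follow the published IndexNow spec, but verify against the current documentation before relying on them, and the page URL and key below are placeholders:

```python
from urllib.parse import urlencode

def indexnow_ping_url(page_url: str, key: str,
                      endpoint: str = "https://www.bing.com/indexnow") -> str:
    """Build the IndexNow GET URL notifying the engine that a page changed.

    The key is a token you host as a text file on your site so the engine
    can verify you own the domain you are submitting.
    """
    return endpoint + "?" + urlencode({"url": page_url, "key": key})

# Hypothetical page and key, for illustration only.
ping = indexnow_ping_url("https://example.com/new-post", "your-indexnow-key")
# Issuing an HTTP GET to `ping` (e.g. with urllib.request) would submit the URL.
```

This is the push shift Fabrice describes: instead of the engine repeatedly crawling to discover a change, the site tells the engine the moment one happens.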
This is just a portion of what was discussed.
Watch the entire interview with Fabrice Canel
Source: Searchenginejournal.com
The In Search SEO Podcast
The In Search SEO Podcast Community Question of the Week!
One local SEO tip to rule them all! Who has the most awesome local SEO tip in the land?!
Summary of Episode 33: The In Search SEO Podcast
This week we are going local again with the most electric personality in the world of local SEO, the great, the grand, the gifted Greg Gifford, who comes to discuss:
- From copy to images; how to optimize your Google Posts
- How to use Google Posts to promote your business
- What mistakes you should avoid when working with the Local Panel’s Q&A feature
Plus, we talk about the need for uniformity within the SEO industry.
Standards of Reference for the SEO Industry? A Crazy Notion! [04:04 – 22:12]
Something happened last week that really bothered Mordy. As you may know, we here at Rank Ranger are nuts about SERP features. We love them. We love looking at them, we love seeing what’s new with them, and we love tracking their display levels on the SERP!
Tracking these SERP features and reporting on how often they show up on the SERP is not easy. For one thing, it’s not as if Google has a giant neon sign that says, “THIS IS A FEATURED SNIPPET” or “THIS IS AN EXPLORE PANEL.”
Separating out which feature is which when they are so close to each other is not easy. Think about it for a second. Think of mobile. On mobile, you have Direct Answers that are attached to Knowledge Panels. So how do you classify the feature… is it a Direct Answer? Is it a Knowledge Panel? Is it both? And if it’s both do you really want to count it twice? Once as a Direct Answer and once as a Knowledge Panel? Probably not.
It’s complicated, it’s not easy, and there is NO STANDARDIZATION.
Here’s the perfect case to illustrate the problem. Last week, Dr. Pete over at Moz caught a spike in Knowledge Panels with attribution. In other words, there is a feature that looks like a Knowledge Panel but has a link that looks like a Featured Snippet. And there’s our classification problem right there. At Rank Ranger, we call this an Explore Panel based on the HTML Google presents to us.
So Dr. Pete said he was seeing more of them. Meaning, he saw more SERPs with Explore Panels, and said that the feature appears on about 2–3% of all SERPs. Which, in terms of display level, is exactly what we see!
So what’s the problem?
Why is this so bothersome?
The problem occurred when Search Engine Land did an article about the sudden increase in the Explore Panel’s appearance on the SERP and asked Mordy to comment, but of course, we didn’t see a spike, so Mordy said, “There is no spike.”
The problem was that Mordy’s comment came off like we were contradicting Dr. Pete and questioning his data and assessment which was the total opposite of Mordy’s intention. There are a ton of reasons why there would be a discrepancy between the datasets!
And this is Mordy’s pet peeve. It’s not that we track this SERP feature and Moz tracks this SERP feature and that’s that. The issue is we don’t all have the same data. Our datasets sometimes diverge: they might record one display level at a given time while we record another. There are features where the two of us generally align and features where we diverge a bit as to their level of display on the SERP.
How is that possible?
Because we track different data sets. Moz, if Mordy’s not mistaken, tracks a data set focused on high volume keywords (queries that are searched for more often) and we track a more normalized set of keywords (a mix of high volume and low volume). There are advantages and disadvantages to both. There’s no one right way!
On top of that, as we mentioned, it’s not easy to classify some of these newer hybrid features because they are hybrids. What we may classify as an Explore Panel could conceivably be classified as a Featured Snippet or a Knowledge Panel, etc.
And that’s Mordy’s second pet peeve. There’s no uniformity! Here’s another story. A while back Mordy sent a spike in the PAA feature to Barry Schwartz and Barry asked, “Why do you guys call them Related Questions? Why not People Also Ask?” It’s because that’s what Google calls the feature in the HTML: Related Questions. So it says ‘People Also Ask’ on the SERP and Related Questions in the HTML, and people use both. That’s pretty annoying!
It gets even better. In that Search Engine Land article on what we call Explore Panels, Dr. Pete called them Knowledge Panels with attribution while Search Engine Land called them explore cards! So now you have three different authorities on the matter calling it three different things – that’s a problem!
The problem is there is no consensus. We’ll go so far as to say there is no REAL communication within the SEO industry.
And we’re not talking about Twitter counting as a conversation. What we’re talking about is having the leaders in the industry who work on classifying Google behavior, such as SERP feature classification and beyond, meet to hash out a standardized consensus.
We need this so that we can offer the industry information that’s a bit more uniform. Now, for some of the reasons we mentioned earlier, such as the difference in the datasets, this will never be 100% uniform and people need to understand that, but let’s at least get the foundation right.
There is a need to create some standards here. By the way, we are not talking about standards of practice. That’s something totally different that we don’t want to touch with a 100-foot pole, let alone a 10-foot pole. We have no interest in solving the world’s problems here, like subfolders vs. subdomains!
As Bill Murray said in What About Bob… baby steps!
Getting Google Posts & the Q&A Feature Right: A Conversation with Greg Gifford [22:12 – 01:03:44]
[This is a general summary of the interview and not a word for word transcript. You can listen to the podcast for the full interview.]
Mordy: Welcome to a very special In Search SEO podcast interview. I want you to put your hands together for the only person I know who can make Encino Man his lead slide in a presentation at one of the industry’s biggest SEO conferences. You already know who I’m talking about… he is now the VP of Search at WikiMotive, he is Greg Gifford.
Congrats on the new gig and welcome to the show!
Greg: Thanks! You know what’s great is that I usually use the movie theme song in my slides but that was the first time I used Encino Man as the title slide. Usually, it’s my sub-title slide when I’m talking about links. And it’s always funny when it pops up as there will be several hundred people in the room and only 10 people laugh.
M: It’s pretty clear that you’re a pop culture freak… I don’t mean that you’re a freak, I mean that you freaking love it.
So I have a few questions if you’ll indulge me…
Favorite 80s movies.
G: They Live.
M: Favorite 80s band.
M: Favorite 80s song.
G: Wake Me Up Before You Go-Go by Wham. It’s the perfect cross-section of 80s pop. Five minutes from now I might change my mind but for now, I’m going to go with that.
M: Great. Let’s go deep into Google Posts and the Q&A Feature. How you write for any sort of content is going to vary depending on what you’re writing. How should you write for Google Posts?
G: So a lot of people only pay attention to what shows up in the full post and not what shows up in the thumbnail. What you need is to approach the thumbnail view almost like a search or display ad where you have to write something compelling that will grab everyone’s attention so they’ll click to see the full post. Who cares if you wrote a 1500 character post if they’re not compelled to read the first 100 characters? You have to be really careful with what comes up in your first sentence or two because that’s what will be in your thumbnail and that’s what will get people to click on it.
M: There’s been a lot of talk about what images and image sizes work best for Google Posts. I myself have heard at least 3 different recommendations on it. Can you settle this once and for all? What sized images should you use in Google Posts?
G: See, the problem is when Posts first came out they had a different dimension size. So a lot of people immediately made videos and blog posts saying “this” is the size and no one really updated it since. We did a lot of testing after it was changed and the ideal size now is 1200×900 pixels. That’s the best size that will fill the window and is the closest to what will appear so you can have a little more control over what’s visible after it’s cropped.
M: A survey came out a few months ago asking SEOs if they believe that Google Posts affect ranking. There was a major split in the community over whether they do or not. What are your thoughts?
G: I don’t personally think it does. There are several local SEO experts that say it does influence ranking. I even used to say it did but after doing tests I don’t think it does anymore.
M: Let’s talk conversions. There are so many CTAs that are available for you to add to your Google Posts. Are there some that are more effective than others? When should you use one over the other? What’s your general advice for CTAs?
G: There are four different post templates, and the template you use depends on the messaging in the actual post. There are positives and negatives to each. We tend to like the What’s New post the best because it gives you the most text in the thumbnail view. For all of the other post types, you’re going to lose an additional line.
To explain, you get four lines of text without the CTA, and whenever you use a CTA you’re going to lose a line of text. But of course you want a CTA because it would be silly not to. So with the What’s New post you get three lines of text, and with the others you get a title and a date range/price range, which leaves only one single line of text. It’s tough to write something compelling with so few words. Sometimes there are reasons to show the offer or the date range, but most of the time you want to use the one post type that gives you the most visible text, with three lines.
When choosing your CTA buttons you can choose from Order Online, Buy, Book, Call, and Learn More. Choosing a button is based on what you’re looking for. Typically we use the Learn More button because when I was working in the automotive industry the other buttons didn’t really fit for a car dealership. I like teasing users with the Google Post and the little thumbnail view. They click into it and maybe they’ll read it, maybe they won’t. I don’t like using 1500 characters. I like a more quick and condensed version of why we kick ass with the CTA to click here to learn more. It then will take them to a landing page of your site where you have full analytics of what they do.
There might be situations where Call Now or Sign Up might work and that Sign Up button leads to a lead form. It depends on your messaging in the post.
M: I have two questions for you on that. Do you have any data if people are clicking on the post or do they just scroll through?
G: Again, it really depends on what you’ve got in that thumbnail view. In my previous job, we had dealers that were getting 50-60 clicks a week on posts and that’s to the website. It really depends on your messaging. Sharing a blog post or anything generic and fluffy doesn’t work and people won’t care. But if it’s an oil change special or a tire rotation special or whatever special is running at the service department is great. Saying there’s a limited time offer on something you need is what gets people to click on the full details and then click through to the site.
So there’s no way to really tell how many people actually clicked on the thumbnail to view the full version but if you’ve got 50 or 60 people who clicked to your website there had to be more who saw the offer. And if you’re getting conversions of two or three a week then that’s killer as that’s free advertising.
There are definitely businesses we did posts for that didn’t get any clicks and didn’t see any improvement in ranking visibility. I don’t think anyone has definitively proved that doing posts leads to better rankings. We all know that people don’t sit down and buy from the first business they find. They’re going to do research, see various competitors, read reviews, and all that. And if you’re the only one amongst your competitors doing Google Posts, then it’s a no-brainer.
M: Yeah, and it’s not like there’s much output needed. You write a post, add an image, add a CTA, and you’re done.
G: Yeah, if you streamline the process it should take less than 10 minutes a week to put up a post.
M: The second question I wanted to ask you is you were talking about using the Learn More button. Is this more of a top funnel element? Are you going to get people to buy from your Google Posts or are they going to research more?
G: Of course. As I said, we had people click through for the service specials. We found promotional offers work best. You have to remember that your Google My Business listing is your new home page. All of the people who used to go to your website to get your phone number, to get your address, to read testimonials, and to look at pictures can do all of that within your Google My Business profile. It’s the perfect example of zero-click search. Most of these people are early funnel traffic, people who haven’t been to your site yet, so it’s a great place to push out those promotions.
For things like cars, people will definitely visit the website but for smaller dollar things people don’t have to. You can book a table now straight with Google. So posts are another way of grabbing a little of that traffic from Google and bringing it to your site.
M: Yeah. It’s surprising to see how so many businesses aren’t utilizing Google Posts. The very fact that with Google Posts and the Q&A feature that you can put your own content on the SERP is amazing.
G: Exactly. It’s a direct interface to the backend of Google to give them significant information about your entity. Knowing that Google moved to entity-based search and seeing people not use Google Posts and Q&A to feed entity information directly to Google just blows my mind.
M: One thing I don’t see a lot of are videos in Google Posts. Why is that?
G: Yeah, this was a recent change last year to add videos to Google Posts. It’s kind of cool. It’s a little bit tougher because when you’re dropping in a still image you can really control what you show but for video, you get the still image with a Play button on it. I don’t think visually it stands out as much. So they are cool but I haven’t yet seen a very great implementation of using a video instead of really compelling text and a good image.
M: Let’s talk about Event posts. If I’m not mistaken, other posts only stay for 7 days while Event posts stay until the event is over. But you don’t recommend going for the longer stay on the carousel, correct?
G: It depends on what your event is and what you can write about. Again, it comes down to getting three lines of text above your call-to-action link. If you use the Event post, you get the title for the event in the top line, a date range on the second line, and a single descriptive line on the third line.
Let’s say you’re doing Truck Month at your dealership. The title can be, “This Month is Truck Month,” it lasts from the 1st to the 31st, then one line of description, and then a Learn More so people can learn about the specials. The difference with the Event post is that those three lines stay separate, whereas ordinary line breaks don’t show up in the thumbnail view. If you use line breaks, they work in the full post view, but everything gets mashed together in the thumbnail. So Event posts are cool with their three separate lines of an event title, a date range, and a descriptive line. But it’s often better to have an image that carries the title; then you can say, “Limited time only! Just runs in the month of July,” and have a little more compelling text if you’re going to use all three lines. And if seven days pass, just remember to go back and put the post up again.
You really have to approach it like AdWords. Maximize your best possibility of clicks by putting the best text possible in that thumbnail view. If you can do that with an Event post then great, but most of the time you can do better with two or three lines and some creativity with what image you use.
M: So what else would you recommend, consider, or think about when creating Google Posts?
G: It’s really important to understand where the image is going to be cropped. The biggest mistake we see is images of objects or text where half of it is cut off. You have to be really careful because it doesn’t crop consistently. It’s not a center crop, as most people think; it’s slightly above vertical center. And the crop is different between desktop and mobile because the image size is different. You can even put up a single post, come back a day or two later, and the crop will have changed. So you have to be really careful with what’s in your image and make sure everything important that you want seen is visible when you upload it.
M: I know we talked about language within Google Posts, but let’s move to the Q&A feature seen in the Local Panel. What are some things to think about linguistically when working with the Q&A feature in the Knowledge Panel?
G: You want it to be as easy to read as possible. You don’t want to be incredibly verbose. And the biggest bonus is that you can put in your own questions and effectively pre-seed an FAQ page. There’s a new feature on mobile where, as you start to type a new question, it will auto-suggest answers based on similar questions or similar content from past reviews. You’re feeding information into Google’s database about your entity, and in the future, when people ask the question, instead of everyone asking the same question a million times and waiting for an answer, Google will give them that answer immediately because you already put it in. It’s very powerful.
We did a research project last year where we went through around 640 questions asked of a bunch of dealerships over a span of eight months. One of the interesting things we found was that 40% of those questions were leads, meaning they could have resulted in a sale had they been answered. And out of the 640 questions asked, only two or three were actually answered by the business. That’s because most people don’t know these questions are out there.
M: But don’t businesses like these have SEOs? Why wouldn’t they notice?
G: With businesses that don’t have SEOs, it makes perfect sense why they don’t notice. What’s crazy is how many SEO agencies out there do good SEO but don’t do local SEO. They’re not paying attention to Google My Business as much as they should. Any business with a physical storefront whose keyword searches pull up the map pack should be doing local SEO. If you’re not following the right people on Twitter, reading the right blogs, and listening to the right podcasts, then you’re not going to know about the Q&A updates, or that the feature exists at all. Most people aren’t looking at the Google My Business panel itself; they’re just looking at the dashboard on the back end.
M: That’s why it pays to do vanity searches.
I want to harp on something you said about the auto-suggest for Q&A. I heard a rumor that Google is slowly moving to only show reviews in the auto-suggest. Does that make sense?
G: I have not heard that rumor and I don’t think it makes sense. Obviously it’s crowdsourced, so anyone can answer, but it’s better when the business owner answers. It would make sense to prioritize information from reviews, but if none of the reviews are relevant to the question then it will obviously have to pull the answer from a similar question.
M: With that, other than not knowing about Q&A altogether, what are the mistakes you’ve seen when people try to engage with the feature?
G: Mistake number one is not loading in your own questions. The second mistake is not knowing how answers are populated in the results. The upvote system determines what shows as the primary answer, so just because you answered a question as the business owner doesn’t mean it will be shown as the primary answer. You’ve got to pay attention and make sure your answer is the definitive one by getting it the most thumbs up.
M: How long will it take Google to figure out that people are just upvoting their answer to the top?
G: Well it is there for a reason. It’s a crowdsourced feature. It’s a community discussion of people asking questions and anyone else can give the answer. So it makes sense for people to upvote their answers. You don’t want people to have to go through hundreds of answers to find the right one. You want the community to upvote the best answer because it’s a community feature.
Another thing people don’t realize is that questions can be upvoted as well. If a question gets three upvotes it will show up natively in the Google My Business panel.
M: Before we move on I have to ask about Google My Business monetization. There was a survey out there that seems to imply Google is going to turn GMB into a pay to play arena, at least to an extent. To me, a lot of the changes Google has made to its local features create a much stronger dependence on those features. For example, having a greater ability to showcase your products in the Local Panel could make a business more dependent on the feature and more willing to pay for it in the future.
Has Google been gearing up towards GMB monetization for some time now or are we blowing the whole thing out of proportion?
G: It’s safe to say it’s going to happen at some point. I’ve been saying for five or six years that at some point they’re going to start charging for features. I don’t think it’s ever going to be pay to play. I think it will always be a free feature but I think there will be a paid route where there will be things which you can pay for to enhance your profile.
There were about 20 features they mentioned with five or six that already exist in the wild so it makes sense for them to roll them out for Google My Business. Some of the 20 features though are outrageous like getting leads from competitor profiles or pay to remove ads from your profile. They didn’t explain what this means. If they’re talking about competitor ads I don’t think Google will go down that route. What I think makes sense is if you have a Groupon ad that offers a discount to your restaurant, for example, you wouldn’t want people searching for your brand to receive that discount as they’re coming to your business anyway. So maybe Google will remove those Groupon ads if you pay them. There was also the feature where you can pay Google a fee and get support from Google. It’s going to be very basic support and very likely to be outsourced overseas. It’s just not feasible for Google to staff something like that.
Optimize It or Disavow It
M: Assuming you have to do one or the other, which is the lesser of the two evils… a horrible image within Google Posts (unattractive, off-center, you name it) or a totally out of left field CTA that makes no sense for the post?
G: If I had to do one I would have to do the out of left field CTA.
M: Because the image has more visibility?
G: Yeah, because if you have an awful image it’s going to hinder you from getting clicks whereas a bad CTA isn’t going to hurt as much. If you have a gorgeous image with a really compelling description the user will still click.
M: Well thank you for coming on. I really appreciate it.
G: Yeah, thanks for having me.
SEO News [01:06:47 – 01:10:15]
Google Knowledge Panel Was Found Without Attribution: Uh-oh, Google showed a Knowledge Panel without a link! Test or Bug? Google said it was a bug! Phew!
Google My Business Questions Experiencing Backlog: Have a Google My Business question or problem? Get in line, a long line. Google has indicated that it has a backlog of 2-3 weeks!
Coverage Report in Search Console Was Showing Delay: Yep, another Google bug, this time in Search Console. The Coverage report was showing a 10-day delay in data. On June 25th, Google said the bug was fixed and that it would investigate what caused it.
Mobile SERP Favicons Don’t Appear on Every Browser: Google says that mobile SERP Favicons do not appear on every browser. For example, they do not appear on Firefox Mobile on Android.
Google Deprecates Social Profile Markup Support: Google will no longer use social profile markup to decide on the social profiles that should go into a Knowledge Panel.
Search Console Adds Exact Date of Switch to Mobile-First Indexing: Search Console will now tell you whether the mobile or desktop Googlebot is crawling your site. With that, Google will tell you the exact date it switched from using the desktop bot to the mobile bot!
SEO Send Off Question [01:10:15 – 01:14:08]
If Google had an Instagram account… what would be the first image it would share?
For Sapir, Google will probably upload a story tagging Bing and Baidu and caption it with, “Me and my *****,” but it’ll be shady like those ***** that tag their friends in photos even though they’re the only ones who actually look good.
Mordy’s response was in honor of OJ now having Twitter and with Google being accused of showing a Knowledge Panel without attribution. Mordy said it will put out an image of OJ wearing the gloves in court with the caption, “If the gloves don’t fit you must acquit.”
Tune in next Tuesday for a new episode of The In Search SEO Podcast.
Google’s John Mueller was asked in an office-hours hangout about a bad outcome from using Google’s link disavow tool. The person asking had uploaded a disavow file, and within days their rankings collapsed. Mueller answered the various questions and then addressed the timing between uploading the disavow and the ranking changes, in the process revealing how long it takes Google to work the contents of a disavow file into the algorithm.
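For readers unfamiliar with the mechanics, a disavow file is simply a plain-text (UTF-8) file with one entry per line, uploaded through Search Console’s disavow links tool. The domains below are placeholders, not real examples from this case:

```text
# Lines starting with # are comments and are ignored by Google.

# Disavow every link from an entire domain:
domain:spammy-directory.example.com

# Or disavow one specific linking URL:
https://low-quality-blog.example.com/paid-links-page.html
```

Uploading a new file replaces any previously submitted disavow file for that property, which is why Mueller’s point below about slow, incremental reprocessing matters: the effect is not immediate in either direction.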
Disavow Tool Caused Ranking Collapse Within Days?
The person asking the question asked several questions that reflected various theories of why a disavow file might trigger a complete collapse in rankings.
Chief among these ideas was whether using the disavow tool is in itself a negative signal, and how long it takes to get rid of a “black mark” for using it.
Here is the question:
“Does using the disavow tool raise a flag in the algorithm and trigger a soft penalty on a website for possibly engaging in link building in the past?
We’ve used this tool to remove hundreds of spammy links and our site collapsed a few days later.
Should we remove the disavow tool and how long will it take for a site to return to normal traffic and ranking?
Or is there a permanent black mark against this website for using the disavow tool?”
Google’s John Mueller Discussing Ranking Effect from Disavow Tool
No Penalty for Using the Disavow Tool
Google’s John Mueller affirmed that there was no penalty for using the disavow tool.
“No there is not any kind of penalty or black flag or mark or anything associated with using the disavow tool.”
He then went on to assert that the disavow tool is a technical tool for indicating, within Google’s systems, links that the publisher doesn’t want associated with their site, and that’s all.
He also assured the publisher that the links in a disavow file don’t indicate past bad behavior, because a lot of the time those are just links publishers are worried about.
“And it doesn’t mean that you created those links. It can be something that you found where you’re really worried that Google might get the wrong picture for your website.”
No Need to Use Disavow Tool for Random Links
Mueller next reassured the publisher (as he has many other times) that the disavow tool doesn’t need to be used for random links that are discovered.
He did recommend using the tool for links that could look like something the publisher is responsible for.
“But if you’re seeing something where you’re saying, well I definitely didn’t do this and if someone from Google manually were to look at my website they might assume that I did this, then it might make sense to use the disavow tool.”
A Disavow File is Not an Admission of Link Building Guilt
Mueller again reassured the publisher that filing the disavow is not assumed by Google to be an admission of past wrongdoing in terms of link building.
“…it doesn’t mean that you did it or it’s not a kind of a sign that oh you’re admitting that you’re admitting that you were doing link games in the past.”
He next reassured that there’s no kind of memory of past wrongdoing once a site has cleaned up from a manual action.
“From our point of view if you’ve cleaned up an issue then you’ve cleaned up that issue. With some kinds of issues it does take a little bit longer for things to settle down just because we have to reprocess everything associated with the website and that takes a bit of time.
But it’s not the case that there is any kind of like a grudge in our algorithms that’s holding back a site.”
Does Disavow Tool Impact SERPs Within Days?
This is the part that I personally find the most interesting. I’ve had many people claim that the disavow tool works because, in their own experience, they saw rankings change within days of uploading a disavow file, which to them proves that the tool really improves search results.
Who can argue with the proof found in that kind of pudding, right?
An SEO does this, Google rankings jump, a simple correlation, right?
This is what John Mueller says about how long it takes to process Disavow Tool data into rankings:
“With regards to this particular case, where you’re saying you submitted a disavow file and then the ranking dropped or the visibility dropped, especially a few days later, I would assume that that is not related.
So in particular with the disavow file, what happens is we take that file into account when we reprocess the links kind of pointing to your website. And this is a process that happens incrementally over a period of time where I would expect it would have an effect over the course of… I don’t know… maybe three, four, five, six months …kind of step by step going in that direction.
So if you’re saying that you saw an effect within a couple of days and it was a really strong effect then I would assume that this effect is completely unrelated to the disavow file. …it sounds like you still haven’t figured out what might be causing this.”
Just Because it’s Obvious Doesn’t Mean it’s the Answer
Just because you see something obvious that jumps out doesn’t mean that it is the reason for whatever it is you are trying to understand.
Obvious only means that it is easily seen, that’s all.
Meanwhile the real explanation might be something that isn’t easily seen.
In this case the person filed a disavow and the rankings changed within days, yet there was no causal connection between the two at all.
Just a coincidence.
Mueller confirmed that it takes months for the data in a disavow file to work its way through the algorithm to the rankings and it happens incrementally. Incrementally means in small batches.
How Long it Takes for Disavow Tool to Work
The question is at the 17:57 mark and the answer is at the 20:50 mark:
Source link : Searchenginejournal.com