Featured Snippet Churn Documented: What it Means for Your SEO Strategy
If you’re in the search industry, you’ve likely noticed that the site Google uses as the source of a featured snippet often changes over time. It’s clear that Google is going through some type of optimization process designed to improve the overall results. In our latest study, we examine the general stability of Google’s featured snippet results.
To measure just how active Google’s testing may be, we took 4,999 queries that we’ve seen provide a featured snippet in the past. We checked these for 124 consecutive days using STAT Analytics to see how often they changed over time. A change could include showing a knowledge graph instead of a featured snippet, showing neither, or swapping out the source site.
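The day-over-day comparison behind this kind of tracking can be sketched as follows. This is a simplified model, not STAT Analytics' actual data structures; the record fields and the `classify_change` helper are illustrative assumptions.

```python
# Hypothetical sketch of classifying a day-over-day SERP change for one query.
# The snapshot structure is an assumption for illustration, not STAT's API.

def classify_change(yesterday, today):
    """Compare two daily SERP snapshots for the same query.

    Each snapshot is a dict like:
        {"snippet_domain": "wikipedia.org" or None,
         "has_knowledge_graph": True or False}
    """
    if yesterday == today:
        return "no_change"
    if yesterday["snippet_domain"] and not today["snippet_domain"]:
        # Snippet disappeared: either replaced by a Knowledge Graph result
        # or dropped entirely.
        if today["has_knowledge_graph"]:
            return "snippet_replaced_by_knowledge_graph"
        return "snippet_dropped"
    if (yesterday["snippet_domain"] and today["snippet_domain"]
            and yesterday["snippet_domain"] != today["snippet_domain"]):
        return "source_swapped"
    return "other_change"

# Example: the snippet source flips from one domain to another.
day1 = {"snippet_domain": "wikipedia.org", "has_knowledge_graph": False}
day2 = {"snippet_domain": "history.com", "has_knowledge_graph": False}
print(classify_change(day1, day2))  # source_swapped
```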
Basic Study Results
The graph below shows the percentage of time over the 124 days of our test that queries showed a featured snippet. So the left-most bar shows the number of queries that displayed a featured snippet for 100% of the test period, the next bar those that had a featured snippet for 90-99% of that time, and so forth.
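The bucketing behind this chart amounts to a simple histogram: for each query, compute the share of the 124 days that showed a snippet, then bin it. A minimal sketch, with the per-query day counts invented for illustration:

```python
# Bin queries by the percentage of test days on which they showed a
# featured snippet, mirroring the chart's buckets (100%, 90-99%, ..., 0%).
# The per-query day counts below are invented for illustration.
from collections import Counter

DAYS = 124  # length of the test period

def bucket(days_with_snippet, total_days=DAYS):
    pct = 100 * days_with_snippet / total_days
    if pct == 100:
        return "100%"
    if pct == 0:
        return "0%"
    lo = int(pct // 10) * 10
    return f"{lo}-{lo + 9}%"

days_per_query = {"query a": 124, "query b": 118, "query c": 0, "query d": 62}
histogram = Counter(bucket(d) for d in days_per_query.values())
print(histogram)  # Counter({'100%': 1, '90-99%': 1, '0%': 1, '50-59%': 1})
```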
Let’s look at desktop first:
Then, for mobile:
Interestingly enough, 522 of the queries never showed a featured snippet during this test, though during our past studies we’ve seen featured snippets on these queries.
Knowledge Graph results appeared far less often: 4,541 of the queries never showed one. For that reason, the bar chart below breaks out the percentage of time queries showed a Knowledge Graph result, restricted to the queries that showed one at least once.
First, we look at the desktop:
And next, we look at mobile:
We also examined the domains that tend to get the most featured snippets. The top 10 domains, in order, were:
Note the differences between desktop and mobile. Not only do the domains appear in a different order, but education.jlab.com shows up only on desktop, with history.com taking its place on mobile.
Wikipedia is certainly the 800-pound gorilla here, showing up for 40% of the featured snippets on desktop and 36% of the featured snippets on mobile. The top ten domains captured 47% of all featured snippets on desktop and 41% on mobile. This means that a lot of opportunity remains for other sites to earn featured snippets!
Featured Snippet Churning Study Results
425 of the keywords we tested never showed a featured snippet on desktop, and 888 never showed one on mobile, even though our prior studies found featured snippets for them. 1,567 of the keywords on desktop drew their featured snippet from more than one site during the course of the study.
The keyword that showed the most variability was “what is day student tuition.” During the course of the study, we saw 12 different domains used to provide the featured snippet for this keyword, and it went 8 days without any featured snippet at all. Here is the breakout for the number of domains used per keyword, for both desktop and mobile:
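Measuring this kind of churn reduces to counting the distinct source domains seen per keyword across the study. A sketch of that calculation, with invented daily observations (not the study's actual data):

```python
# Count how many distinct domains served the featured snippet for each
# keyword over the study, plus how many days had no snippet at all.
# The daily observations below are invented for illustration.

# keyword -> list of daily snippet sources (None = no featured snippet)
observations = {
    "what is day student tuition": ["a.edu", "b.edu", None, "c.edu", None],
    "stable query": ["wikipedia.org"] * 5,
}

domains_per_keyword = {
    kw: len({d for d in days if d is not None})
    for kw, days in observations.items()
}
snippetless_days = {
    kw: sum(1 for d in days if d is None)
    for kw, days in observations.items()
}
print(domains_per_keyword)  # {'what is day student tuition': 3, 'stable query': 1}
print(snippetless_days)     # {'what is day student tuition': 2, 'stable query': 0}
```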
Based on our data, we see a great deal of featured snippet churn, and it appears to happen less on mobile than on desktop. Let’s take a look at the situation with Knowledge Graph results next.
During the course of the study, the number of queries showing Knowledge Graph-based results ranged from 260 (5.2%) to 329 (6.6%), increasing over time. Since Knowledge Graph results are not sourced from a particular site, churn here means days with or without a Knowledge Box or Knowledge Panel result for the query. 165 of the keywords showed a Knowledge Graph result every day of the study, while 156 showed one 50% or less of the time.
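The percentages quoted here follow directly from the 4,999-query base, and are easy to verify:

```python
# Knowledge Graph share of the 4,999 tracked queries, as quoted above.
TOTAL_QUERIES = 4999
low, high = 260, 329
print(f"{100 * low / TOTAL_QUERIES:.1f}%")   # 5.2%
print(f"{100 * high / TOTAL_QUERIES:.1f}%")  # 6.6%
```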
Last, but not least, let’s examine the raw data on featured snippets and Knowledge Graphs by day, across the entire study:
In the mobile world, 468 keywords showed a Knowledge Graph result every day of the study, and 561 showed one 50% or less of the time. As for the day-by-day view, here is what we see for mobile:
One notable takeaway from the charts above is that mobile shows more Knowledge Graph results than desktop.
Let’s look at this in a bit more detail:
Now the same data for Knowledge Graphs:
Here’s an example of how this happens. We’ll start by looking at the SERP for a query on desktop:
Next, we’ll look at the mobile result:
What’s happening is that Google prefers to show trimmer results on mobile, so it renders a smaller Knowledge Graph-based result derived from the Knowledge Panel shown on desktop, and drops the featured snippet.
What this data shows is that Google is still working very hard at testing featured snippets and Knowledge Graph results. As I documented in our 2016 study of 1.4M search results, this looks like a dynamic testing process, which further demonstrates how important it is for Google to make these results as accurate as possible.
In voice environments, Google generally cannot present more than one result when the answer is spoken in reply to a user’s query. As a result, it needs that first result to be the absolute best answer as often as possible.
For publishers, learning how to earn featured snippets remains an imperative. This is best done by focusing on user value, because that’s what Google is doing. You must be prepared for continued volatility, but keep on experimenting. The worst that can happen is that you will add value to your site and improve the experience of your users.