Have you ever climbed to the pinnacle of a search term, only to watch it disappear for no apparent reason? In SEO this is commonly known as “Everflux” or “the Google Dance.” Although it can be a source of stress for many (as their rankings momentarily dip south on hiatus), why does it really happen, and how necessary is this function to maintaining quality in the Google search index? Here is my take.
It is no secret that search engine indexes churn and qualify content and links, but have you ever considered that indexes are often rebuilt (right under your eyes) as other searches are being executed?
Out to Lunch:
Does the search index take a break on Friday to make room for new entrants on specific keywords? It would appear so. With millions of new sites being created and added to the weekly crawl across the web, the index must ebb and flow to make room for change, yet maintain order through redundancy. Think of it as a running backup that feeds into the real-time main index, “like an ocean flowing into a lake” which eventually feeds back into the ocean.
Search engines must constantly crawl and update their index to maintain relevance or the user experience suffers. Without spiders investigating sites and updating the index, we would constantly be presented with stale data.
Case in point: my theory is that until the algorithm shakes out the new entries and cross-references them against results with immunity from extinction, each competitive search term must face another round of due diligence in an alternative testing ground to maintain its visibility and longevity in the main search index (until it reaches a state of authority/immunity).
I am open to a more detailed explanation of this algorithmic function if someone, such as a search engine engineer, elects to share their grasp of this compiling/decompiling, inclusion-or-elimination phenomenon. I welcome and encourage any feedback on the topic and how it pertains to the search index.
The reason this post addresses amnesia and threshold redundancy is that, after spending countless hours each day analyzing traffic patterns and search engine optimization results, you start to gain a sense of when rankings and patterns last appeared in the search index across different data centers.
I recall a time last year when the bottom fell out of various search results, and it seems that the new ranking algorithm (judging from the trails it leaves behind) is continually in a state of self-absorption, alignment and self-adjustment. The bottom line of this function is clear: quality control and relevance.
For example, the word “company” was not valued the same as the word “services” as a modifier when this occurred. Seeing search results with hundreds of millions of competing phrases reduced to one million results was, needless to say, alarming to most.
The idea that each word has a unique threshold, even though both carry similar connotations, sheds even more light on how popular search terms have their own unique rules and how search engines interpret and display results.
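To make the idea concrete, here is a minimal sketch of per-word thresholds. Everything in it is a hypothetical assumption for illustration only: the threshold values, the scores, and the notion that a single cutoff governs how many results are shown are my invention, not anything confirmed about Google’s algorithm.

```python
# Hypothetical illustration: each modifier word carries its own cutoff,
# so the same pool of pages yields a different number of visible results.
KEYWORD_THRESHOLDS = {"company": 0.8, "services": 0.6}  # invented values

def results_shown(pages, modifier):
    """Return only the pages whose score clears the modifier's threshold."""
    cutoff = KEYWORD_THRESHOLDS.get(modifier, 0.5)
    return [url for url, score in pages if score >= cutoff]

pages = [("a.com", 0.9), ("b.com", 0.7), ("c.com", 0.55)]
print(results_shown(pages, "company"))   # stricter cutoff -> ['a.com']
print(results_shown(pages, "services"))  # looser cutoff -> ['a.com', 'b.com']
```

Under this toy model, a stricter threshold on “company” alone would explain a result count collapsing from millions to a few hundred thousand without any page actually being removed.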
For example, a keyword used in conjunction with the word “company” scored lower for competing pages and showed a few hundred thousand search results instead of a few million. Essentially, the index was compressed and rebuilt over the course of weeks. This was also concurrent with universal search being injected and released, so one can only speculate as to the true cause behind such effects.
Recently there was another anomaly: case-sensitive search made its debut in Google, where a search query capitalizing the first letter of each word returned one result, while the same query in lower case produced two distinct search results. Variations of this new algorithm are still running as we speak, to a slighter, more manageable extent for searchers to ingest.
I am going out on a limb here, which is why this is categorized in the search engine optimization myths section of our blog. Yes, this is a facet of the Google Dance, but catching it mid-phase was, I must say, like reading Monday’s headlines on Sunday afternoon (a glimpse of things to come).
Since I am neither a computer science major nor a search engine engineer, this is probably just another day at the office to those who could describe it succinctly; from the eyes of an observer, I need to build my case and elaborate on my findings. I was, however, able to find a very informative piece on this phenomenon and have included the link to Markus Sobek’s expert opinion on “the Google Dance,” an absolute must-read for further elaboration.
So here it is: my hypothesis is that on Friday, several of the major Google data centers punch their ticket for the week and essentially put up a secondary index so they can tally the new data gleaned from new sites and assess their quality score.
During the next 48–96 hours (according to this theory), the main index runs a battery of tests on pages that rank for competitive phrases (to, in essence, cross-reference and test their links and various on- and off-page factors against other data centers). This could be done using spiders armed with different algorithms and crawl rates to create a site profile.
Then, after integrating all of the new sites and the information crawled over the week, the real-time index combines this data with the older snapshot and creates a blend of the previous and current data from which to pool search results (typically over 48–96 hours). It is almost as if the index virtually deconstructs and reassembles itself to see which results have authority and remain intact.
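The merge step hypothesized above can be sketched as a toy function. To be clear, this is purely illustrative: the function name, the authority scores, the floor value, and the idea that fresh crawl data simply overwrites the old snapshot before a single authority test are all my assumptions, not a description of any real search infrastructure.

```python
# Hypothetical sketch of the dual-index "ocean and lake" cycle: the weekly
# crawl flows into the stable snapshot, and only entries that clear an
# authority floor survive into the rebuilt index. All values are invented.

def merge_indexes(main_index, weekly_crawl, authority_floor=0.5):
    """Blend the older snapshot with freshly crawled pages.

    Re-crawled pages take their new score; every entry, old or new,
    must clear the authority floor to remain in the merged index.
    """
    combined = {**main_index, **weekly_crawl}  # weekly data overrides old scores
    return {url: score for url, score in combined.items()
            if score >= authority_floor}

main_index = {"site-a.com": 0.9, "site-b.com": 0.4}
weekly_crawl = {"site-c.com": 0.7, "site-b.com": 0.6}  # site-b re-scored upward
print(merge_indexes(main_index, weekly_crawl))
# -> {'site-a.com': 0.9, 'site-b.com': 0.6, 'site-c.com': 0.7}
```

Note that in this sketch a page can drop out of the merged index without its content changing at all, simply because the re-crawl assigned it a lower score, which would look exactly like a ranking vanishing overnight.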
The last such cycle for the keyword I was studying, “SEO,” lasted 14 days; then, in the middle of the night (last Sunday, June 15th), the switch flipped, and on Monday the choice ranking was gone. Is this a feature whereby, through a process of elimination, only the most relevant results survive? It is almost like a trust-rank/relevance bot that passes or flunks each page it grades, determining whether the page graduates to a more stable parameter, is purged from the index, or is moved to a secondary or supplemental index (in safe keeping until it can prove its value).
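The pass-or-flunk grading imagined here can be written down as a trivial classifier. Again, this is a sketch under invented assumptions: the equal weighting of relevance and trust, the two cut-off marks, and the three destinations are placeholders for whatever the real (and unknown) grading logic might be.

```python
# Hypothetical "relevance bot" from the theory above: each graded page
# either stays in the main index, drops to a supplemental index, or is
# purged. Weights and cut-offs are invented for illustration.

def grade_page(relevance, trust, pass_mark=0.75, supplemental_mark=0.4):
    """Score a page and decide which index (if any) it lands in."""
    score = 0.5 * relevance + 0.5 * trust
    if score >= pass_mark:
        return "main index"
    if score >= supplemental_mark:
        return "supplemental index"  # held in safe keeping until it proves value
    return "purged"

print(grade_page(0.9, 0.8))  # -> main index
print(grade_page(0.5, 0.5))  # -> supplemental index
print(grade_page(0.2, 0.1))  # -> purged
```

The middle tier is the interesting one for this theory: a page that flunks the main-index mark but clears the supplemental mark would vanish from competitive results for a few days and then reappear, which matches the disappearing-and-returning pattern described in this post.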
From my experience, the results return to their position from before the fall if your site emerges from the gauntlet unscathed. It is nearly Wednesday (roughly 70+ hours from the time they disappeared), and the search results are stabilizing for the target phrase “SEO,” with an additional two million results that made it through (from the inflated surplus of sites targeting the keyword).
To reiterate, this is merely my hypothesis about one facet of the search algorithm. Why would I suspect such a thing? Because I took a snapshot while one such operation was compiling: the term “SEO,” which usually has 255,000,000 competing results, shifted to over 1 billion (see attached photo).
Is this search engine amnesia and a process of redundancy or rediscovery? Could this be the effect of multiple data centers comparing results? Or is it just business as usual in the Google search algorithm? At this juncture it is too early to tell.
This one facet of search (the Google Dance on a grand scale, with a few new twists) may in fact be what is happening when search results plummet. Instead of viewing this as a loss of relevance, think of it as a safety precaution of due diligence that allows Google to maintain the highest quality possible in its index.
If your site makes the grade and you can keep up with this algorithmic heavyweight, then theoretically your results will return once the integration period stabilizes. According to the study now in progress, one may be able to predict the churn rate of particular keywords and insulate a site so it is less susceptible.
However, just observing the phenomenon is enough to make any SEO nervous as you watch your most competitive keyword take a hiatus and go south in the rankings while it is being interrogated for quality.
This survival-of-the-fittest, artificial-intelligence-like inclusion and exclusion feature of the search engine algorithm is there to police relevance, and it is a constant work in progress. Just think of it as the agents going after Neo in the Warner Bros. movie The Matrix.
Add to that the fact that each data center has a unique snapshot of where your site ranks and can spider your site using a wide array of bots to test the structure of your content, links and authority (in an instant). If your site checks out, you are granted back into the search engine VIP club.
This illustrates why “the Google Dance” is just one of many facets that organic SEO specialists have to contend with. I can only speak from my own conclusions; however, I have noticed this phenomenon on two occasions within a month and have been studying which terms are affected by it.
This article falls into the category of theoretical SEO and search engine optimization myths; however, if its conclusion ultimately sheds light on a particular aspect of the search algorithm, it could provide more adept SEOs with the ability to study and share nuances on how to insulate pages from the scramble. It is too early to tell, but it is always nice to share a glimpse of something gleaned quite by accident while conducting routine research in one’s daily tasks.
Another thing that appears to have occurred is that the order of pages returned as relevant search results has shifted (instead of page A, page B now enters the rankings with a different relevance score). But that is another post in its entirety; stay tuned for more from SEO Design Solutions.