What may have seemed like a blatant attempt to pull a PR stunt on Starbucks, with a code name “caffeine” inside the Googleplex, is now poised to stir up the search engine results pages and the web as we know it.
Many natural search/organic SEO firms and online moguls could soon face the daunting task of diagnosing unprecedented dips in organic rankings, dips that could change the horizon for both rankings and revenue. Although you should never be too heavily vested in one conversion path or marketing medium alone, the sad truth for many businesses is that organic traffic remains one of their most cost-effective sources of conversions.
It is possible that a site that has held multiple coveted top 10 positions for years could slip considerably and require an entire revamping of links, off-page factors, or tweaks to “dial in” results to stay in good standing with the new caffeinated algorithm.
Is this an SEO Showdown or Simple Evolution?
It is no secret that automation (the gray area of SEO) leaves a unique footprint that search engines could use to gauge authenticity. For example, a website mysteriously gaining 10,000 new links in one month from a group of related IPs, when it had only 57 visitors the month prior, is anything but natural.
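To make that footprint concrete, here is a minimal, purely illustrative Python sketch of how such a link-velocity check might look. The function name, thresholds and data shapes are my own assumptions for the sake of the example; nothing here is a documented Google signal.

```python
from collections import Counter
from ipaddress import ip_address

def flag_suspicious_growth(prev_month_links, new_links, growth_ratio=10.0, subnet_share=0.5):
    """Illustrative heuristic: flag a backlink profile whose month-over-month
    growth is explosive AND whose new links cluster into a few /24 subnets.
    Thresholds are arbitrary placeholders, not real ranking signals."""
    if prev_month_links == 0:
        prev_month_links = 1  # avoid division by zero for brand-new sites

    # 1. Velocity check: did the link count jump far faster than before?
    velocity = len(new_links) / prev_month_links
    if velocity < growth_ratio:
        return False

    # 2. Diversity check: do the new links come from a narrow band of IPs?
    subnets = Counter(str(ip_address(ip)).rsplit(".", 1)[0] for ip in new_links)
    top_subnet_share = subnets.most_common(1)[0][1] / len(new_links)
    return top_subnet_share >= subnet_share

# Example: 57 links last month, 10,000 new links this month, nearly all from one /24
new_links = ["203.0.113.%d" % (i % 254 + 1) for i in range(10_000)]
print(flag_suspicious_growth(57, new_links))  # True: explosive growth from clustered IPs
```

A real system would obviously weigh many more factors, but the point stands: this kind of pattern is trivially machine-detectable.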
Similarly, many of the crawl patterns that aged sites have incidentally benefited from, such as an initial crawl for content (supported by primary navigation) followed by infrequent deep-crawls of thousands of sub-pages, may not behave the same way under the new infrastructure.
Before, when significant differences were ignored for weeks on end, the SERPs stayed buoyant for longer periods of time. Seeing search results “come out of nowhere,” hanging out in a top 10 spot for 4-6 days and completely blindsiding the previous SERP holder, is already something we have become familiar with.
For all we know, this migration into the new platform might already be complete by the time you read this post. The fact is, with the current churn-rate of content consumption and rich media, there are no signs of the rate of new content slowing down anytime soon.
It is logical to suspect that the “new caffeinated filters” will identify patterns of automation and parse and purge the existing index (a process of refinement) by observing timestamps, IP ranges, semantic anchor text clusters and other unique signatures. Or, on the contrary, will someone simply flip a switch and, voila, a new Google is unleashed?
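As a purely hypothetical illustration of one such signature, the short sketch below measures how heavily a backlink profile leans on a single anchor phrase. The function name, sample data and threshold interpretation are assumptions for illustration only; nothing about Caffeine’s actual filters is confirmed here.

```python
from collections import Counter
import re

def dominant_anchor_share(anchor_texts):
    """Return the fraction of inbound links whose (normalized) anchor text
    matches the single most common anchor. Purely illustrative; a real system
    would combine many signals (timestamps, IP ranges, link context, etc.)."""
    normalized = [re.sub(r"\s+", " ", a.strip().lower()) for a in anchor_texts]
    counts = Counter(normalized)
    return counts.most_common(1)[0][1] / len(normalized)

# A natural profile mixes brand names, URLs and partial phrases...
natural = ["Acme Widgets", "acme.example.com", "click here", "this review", "Acme"]
# ...while an automated campaign hammers one commercial phrase.
automated = ["buy cheap widgets online"] * 9 + ["Acme Widgets"]

print(dominant_anchor_share(natural))    # 0.2 -> diverse, looks organic
print(dominant_anchor_share(automated))  # 0.9 -> repetitive, looks machine-built
```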
As exciting as the news is, this could spell disaster for businesses that thrive on their organic positioning. Much of the SEO process is having a solid point from which to anchor and assess other related points (to find your bearings).
Without knowing which metrics to assess first, countless hours could be burned up trying to identify signals that produce consistent results. I wouldn’t necessarily call it a setback so much as a challenge for those who were resting on their laurels.
For those sites that have been toiling away, consistently creating unique content, applying relevant internal links and following best practices, I suspect you will start to see significant increases in the range and volume of monthly referrers from Google as a result of the new crawler technology. Essentially, you have nothing to fear; the less diligent gray areas are the most likely target of this revision, an attempt to reclaim the web from excessive spam.
Cloud Computing at its Finest
While staggering crawl rates and tweaking naming conventions, titles, links or descriptions across a CMS (content management system) template may have worked with the old, decaffeinated Google results, who knows how the new engine under the hood of Google’s revamped search engine will treat these types of SEO behavior until it has at least been around the block a few times.
Eager SEOs have already started compiling data on differences in SERP positioning between the old and the new (non-caffeinated versus caffeinated) engines. Although it seems like a noble idea, at this phase of data collection and refinement it is a near-worthless pursuit unless Google simply stops working on the algorithm and leaves it intact indefinitely (which is highly unlikely).
Unless you have the means to measure vast inconsistencies based on the ghost memory of “the web as it was” when it was last crawled and “the web as it is now”, studying results “after the fact” leaves too much to chance.
It’s better not to jump the gun, but with the turbo-charged information highway revving out content at breakneck speed, who’s to say there is anything wrong with a little caffeine to stir things up a bit?