In today’s competitive SEO landscape of multimillion-dollar content wars, automation and improved search engine filters, how does your website stand up to the competition?
In case you haven’t noticed, Google’s pre-caffeinated algorithm has become less trusting of new websites entering the online space and attempting to occupy a prime position. Citation and site or page reputation are now more important than ever when planning an organic search engine optimization campaign.
This translates directly to who you are, who you link to, who links to you, and how much trust those sources have acquired within Google’s graph of known nodes – the sites deemed significant as trusted sources.
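The trust model described above – trust originating at known, vouched-for nodes and flowing outward along links – can be sketched in the spirit of TrustRank-style propagation. This is an illustrative toy, not Google’s actual algorithm: the site names, damping factor and iteration count are all assumptions for demonstration.

```python
# Toy sketch of trust propagating through a link graph, TrustRank-style.
# Seed sites hold a base trust score; each iteration, a fraction of a
# page's trust is split among the pages it links to. All names and
# weights here are hypothetical.

def propagate_trust(links, seed_trust, damping=0.85, iterations=20):
    """links: {page: [pages it links to]}; seed_trust: {page: base score}."""
    trust = dict(seed_trust)
    for _ in range(iterations):
        new_trust = dict(seed_trust)  # seed sites keep their base trust
        for page, outlinks in links.items():
            if not outlinks:
                continue
            # a damped share of this page's trust flows to each outlink
            share = damping * trust.get(page, 0.0) / len(outlinks)
            for target in outlinks:
                new_trust[target] = new_trust.get(target, 0.0) + share
        trust = new_trust
    return trust

links = {
    "trusted-hub.com": ["new-site.com", "directory.com"],
    "directory.com": ["new-site.com"],
    "new-site.com": [],
}
scores = propagate_trust(links, {"trusted-hub.com": 1.0})
```

A page cited by several trusted nodes (here, `new-site.com`) accumulates more trust than a page with a single citation – which is exactly why who links to you matters more than raw link count.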
You may ask yourself why. The answer lies in the commodity value of the top 10 results and the traffic flowing through semantic clusters of keywords. As the cost of PPC grows and keywords become more difficult to target organically, automation tends to rear its ugly head.
Competition is at the heart of the equation, responsible for elevating massive content wars as companies or individuals attempt to sway the search algorithm by flooding spiders with articles, blog posts, RSS feeds, spun (re-purposed) content and affiliate “make believe” reviews.
Consider that there is only a slight degree of distinction between a viral hot topic rising to the top of search results naturally and an injected, intelligently randomized, organic-looking spike – “the footprint”.
One thing to keep in mind: everything leaves a trail, and with the exception of the link neighborhood and the magnitude of the ripple, Google does a fair job of sorting relevance-to-noise ratios within its algorithm.
So, how is a search engine going to read, ingest and dissect all of this information without accidentally lumping signals from reputable, trusted sources in with automated blog-and-ping scrapers all attempting to inflate authority to build a bigger funnel?
Keep in mind, as Google adapts, so do others, with scripts, tools and tactics to cloak the trail and obfuscate data (IPs, shingles, etc.) to minimize the footprint and create a tactical advantage through content post frequency (more content, more pages, more rankings) and link citation / backlink acquisition.
For those unfamiliar with the sandbox (which some believe does not exist), it is like a holding tank for observation: it places certain websites or pages in quarantine until relevant signals vouch for them.
One thing I have noticed recently is (1) de-indexation of pages (like a hypersensitivity to duplicate on-page content) as new filters are integrated, (2) extremely fast indexation times for static / trusted pages, almost like a live update loop within the repository, and (3) trusted pages and sites rising in the search engine result pages faster than newer pages (which seem to undergo a more stringent filter for authenticity).
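The duplicate-content hypersensitivity noted above – and the “shingles” spammers try to obfuscate – is commonly explained with w-shingling: break each page into overlapping word n-grams and compare the resulting sets with Jaccard similarity. Here is a minimal sketch; the shingle size and the sample strings are illustrative assumptions, and Google’s real thresholds are not public.

```python
# Minimal w-shingling sketch for near-duplicate detection.
# Two pages with a high Jaccard similarity between their shingle
# sets are likely duplicates or lightly "spun" copies.

def shingles(text, w=4):
    """Return the set of overlapping w-word shingles in text."""
    words = text.lower().split()
    return {tuple(words[i:i + w]) for i in range(len(words) - w + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets (0.0 to 1.0)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

original = "search engines weight the value of pages links and citations"
spun = "search engines weight the value of pages links and references"
unrelated = "a completely different article about cooking pasta at home"

sim_spun = jaccard(shingles(original), shingles(spun))
sim_unrelated = jaccard(shingles(original), shingles(unrelated))
```

Swapping one word near the end of a sentence only changes the few shingles that contain it, so the spun copy still scores high – which is why word-swap spinning leaves a detectable footprint.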
Obviously the variances are unique to each industry, and to the signals and levels of competition for the crowning phrases that represent the most money, but nonetheless the adjustment has left many websites reeling from demotion in the search engine result pages.
This normalization is based on ranking algorithms which weight the value of pages, links, citations and impact. So, aside from an elite group of engineers working in secrecy at the Googleplex, all SEOs have to go on is the trail these adjustments leave behind.
Just as you can determine something’s course by its nature and its nature by its course, the reason why is clear: to minimize the noise and produce the most relevant search results. However, have the more stringent filtering processes left a significant portion of websites exposed to the same treatment, lumping the good with the bad (as popularity and citation are a double-edged sword) and sweeping many sites, pages and keywords under the rug?
The content sandbox exists; but is it reserved for certain websites with certain characteristics, or is it simply a new holding pattern that SEOs will have to adapt to and overcome? While machine learning calculates relevance and importance based on signals and patterns, how can it interpret a new pattern without comparing it to an older one? And if so, will it always make the appropriate choice – on its own, or with some type of human oversight – about which pages in the index are relevant and useful?
As the web continues to expand, new signals will continue to present themselves. Google is not a seer; it is, however, a revolutionary technology company that learns and adapts. It may be too soon to say what the future holds, but the content wars are real and new filters are making their debut.
So, keep in mind: if you are not developing authority and trust (in addition to links and pages), your site may not pass the litmus test when it’s time to sort, sift and assign rankings based on SERP scoring algorithms.