Who said rankings always make sense? Sometimes the SEO signals, the relevance, and the tip-of-the-iceberg analysis just don't add up.
No matter how many times you have seen or used an optimization tactic before, there is always a chance (algorithms are like living, breathing organisms) that something will change or slip below the surface, where it becomes nearly impossible to attribute rankings to any correlating group of causes and effects.
This convergence is where information retrieval meets the wild west of the web, where anything goes and ruthless competition thrives. Sure, anyone can go out there and build links like a bat out of hell and create some temporary rankings for a new website, but what about those aged authority sites that have simmered for years and rank on their own latent, pent-up “ranking credit” earned through trust and the strength of their content?
Those sites are the genuine gatekeepers of the web. Relevance thresholds, coupled with time-sensitive queries, are calibrated based on each user's keywords and key phrases, the history of the websites they frequent (personalization), and hundreds of other metrics served up on the fly to put each page in its respective place.
How is it that the aloof metric of TrustRank, or search engine trust, can undermine the assumptions SEO practitioners often lean on, based on their own limited heuristics and exposure? For example, assumptions like:
- a website needs a large number of backlinks to rank highly for competitive keywords
- a website needs a lot of pages to rank highly in search engines
- you need to theme internal links within your website
- you need relevant titles in order to rank well in search engines
First, let me qualify that I have seen each and every one of these tactics work, but I have also personally witnessed contrary examples of each of these boilerplate metrics: sheer anomalies that defy in-depth analysis.
I have seen sites with 5 pages rank for keywords with hundreds of millions of competing pages, and sites with no visible on-page SEO outranking a streamlined, well-optimized, clear champion page.
But why? Have you ever considered that the algorithm is a rolling work in progress, and that the rules which govern each site could be tied to an anchor point and insulated from newer, more volatile algorithms?
We refer to this as the grandfather effect, and it stands to reason that some type of algorithmic insulation is occurring when you see visible anomalies that defy causation, judging from the chain of effects left in their wake. As your pages age, they too take on this characteristic if they are anchored properly within 3 hops of the root folder and receive a significant amount of link equity from other strong pages.
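To make the "within 3 hops" idea concrete, here is a minimal sketch in Python against a made-up internal link graph; the URLs, the 3-hop threshold, and the use of inbound-link counts as a stand-in for link equity are illustrative assumptions, not anything a search engine has published.

```python
from collections import deque

# Hypothetical internal link graph for a small site: page -> pages it links to.
# Replace with the output of your own crawl.
LINKS = {
    "/": ["/blog/", "/products/", "/about/"],
    "/blog/": ["/", "/blog/old-post/", "/blog/new-post/"],
    "/products/": ["/", "/products/widget/"],
    "/about/": ["/"],
    "/blog/old-post/": ["/", "/products/widget/"],
    "/blog/new-post/": ["/blog/old-post/"],
    "/products/widget/": ["/products/"],
}

def click_depth(links, root="/"):
    """Breadth-first search from the root page: how many hops away does each page sit?"""
    depth = {root: 0}
    queue = deque([root])
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in depth:
                depth[target] = depth[page] + 1
                queue.append(target)
    return depth

def inbound_counts(links):
    """Crude proxy for internal link equity: how many internal pages link to each URL."""
    counts = {}
    for targets in links.values():
        for target in targets:
            counts[target] = counts.get(target, 0) + 1
    return counts

if __name__ == "__main__":
    depths = click_depth(LINKS)
    inbound = inbound_counts(LINKS)
    for page in sorted(depths, key=depths.get):
        flag = "within 3 hops" if depths[page] <= 3 else "too deep"
        print(f"{page:<22} depth={depths[page]}  inbound={inbound.get(page, 0)}  ({flag})")
```

Running this on your own crawl data quickly surfaces pages that are buried too deep or starved of internal links to ever accumulate that kind of insulation.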
However, you have to ask yourself: is there a null point for links, where they switch from link flow to trust? If so, what is it?
- Does link reputation offset link equity within a site? In other words, can one solid .edu or .gov do-follow link from a themed site push a relevant page above another internally leveraged page with dozens of internal and deep links?
- Does that one internal link from a page with the right combination of on-page and off-page factors outweigh the sum of all the other internal links built in the last year?
- Are indexation and PageRank always going to be considerations for search engine result page buoyancy? Why do stale pages rank just fine for years without so much as an update or a link, while others cannot overcome them with fresh links, a high crawl rate, and deep links?
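For what it's worth, the closest public description of a link-flow-versus-trust distinction is the TrustRank paper (Gyöngyi, Garcia-Molina and Pedersen, 2004), which biases PageRank's teleport step toward a hand-reviewed seed set so that trust decays with every hop away from those seeds. The sketch below is just that published idea in miniature; the graph, the seed pages, and the parameters are invented, and whether Google applies anything similar is anyone's guess.

```python
# Hypothetical web graph: page -> outbound links (names are made up for illustration).
GRAPH = {
    "edu-hub": ["niche-blog", "spam-farm"],
    "gov-directory": ["niche-blog"],
    "niche-blog": ["new-site"],
    "spam-farm": ["new-site", "spam-farm-2"],
    "spam-farm-2": ["spam-farm"],
    "new-site": [],
}

TRUSTED_SEEDS = {"edu-hub", "gov-directory"}  # hand-reviewed seed set

def trustrank(graph, seeds, damping=0.85, iterations=50):
    """Seed-biased PageRank: the teleport vector points only at trusted seeds,
    so trust decays with every hop away from them."""
    pages = set(graph) | {t for targets in graph.values() for t in targets}
    teleport = {p: (1.0 / len(seeds) if p in seeds else 0.0) for p in pages}
    trust = dict(teleport)
    for _ in range(iterations):
        new_trust = {p: (1 - damping) * teleport[p] for p in pages}
        for page, targets in graph.items():
            if targets:
                share = damping * trust[page] / len(targets)
                for target in targets:
                    new_trust[target] += share
        trust = new_trust
    return trust

if __name__ == "__main__":
    for page, score in sorted(trustrank(GRAPH, TRUSTED_SEEDS).items(), key=lambda kv: -kv[1]):
        print(f"{page:<15} trust={score:.4f}")
```

In a model like this, a single link from a trusted seed can be worth more than a pile of links circulating among untrusted pages, which is one way to rationalize the .edu/.gov question above without treating it as settled fact.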
The answer is that nobody knows, and the metrics are constantly in a state of flux. Since you cannot isolate any of the 200+ metrics without affecting some portion of another, it is difficult to pair unique effects with corresponding causes (you can never tell what is happening outside of your website or elsewhere in the competitive space).
SEO is a work in progress, and sometimes you really have to dig deep to unravel a competitor's layers of optimization, or implement your own to offset their relevance score, in order to land in the seed set of documents Google identifies as the top 10 candidates with the least imperfect score.
While websites exhibiting domain authority differ, they tend to share the same attributes via the common denominators of:
- unique, compelling content backed by on-page co-occurrence (keywords appearing in close proximity and with prominence; see the sketch after this list);
- a primary topic which creates the topical integrity of internal peer review (through strategic linking with keyword-rich or semantically aligned anchors); or
- peer review (inbound links or citations) from sites based on a similar or compatible theme.
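As promised above, here is a minimal sketch of how you might score the proximity and prominence side of co-occurrence in your own copy. The tokenizer, the scoring formulas, and the sample sentence are all assumptions chosen for illustration; no engine has published how it actually weighs these signals.

```python
import re

def tokenize(text):
    """Lowercase word tokens; a deliberately simple tokenizer for illustration."""
    return re.findall(r"[a-z0-9']+", text.lower())

def prominence(tokens, term):
    """Roughly how early the term first appears: 1.0 = first word, near 0.0 = last word."""
    if term not in tokens:
        return 0.0
    return 1.0 - tokens.index(term) / max(len(tokens) - 1, 1)

def min_proximity(tokens, term_a, term_b):
    """Smallest token distance between any occurrence of term_a and any of term_b."""
    positions_a = [i for i, t in enumerate(tokens) if t == term_a]
    positions_b = [i for i, t in enumerate(tokens) if t == term_b]
    if not positions_a or not positions_b:
        return None
    return min(abs(a - b) for a in positions_a for b in positions_b)

if __name__ == "__main__":
    page_copy = (
        "Our blue widget guide covers every blue widget size, "
        "with widget maintenance tips further down the page."
    )
    tokens = tokenize(page_copy)
    print("prominence of 'widget':", round(prominence(tokens, "widget"), 2))
    print("closest 'blue'..'widget' distance:", min_proximity(tokens, "blue", "widget"))
```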
However, time is the main ingredient that allows a website to rise in due form. Search engines often utilize a commensurate algorithmic measurement to insulate a page with trust and buoyancy if other valid metrics are in place.
This search engine trust becomes obvious when coupled with an aggressive content development strategy, as with a mammoth online presence like Amazon.com, which is able to showcase over 100,000,000 indexed pages that qualify as valid and unique according to Google's algorithm (ranking for virtually all manner of products).
If you were to try dumping 1,000,000 pages into a new site in a few days, or a few weeks for that matter, they would not be met with the same ranking signals, regardless of how unique they were.
There is a unique correlation of citation, expression, relevance, and progression that needs to occur for each website to reach the fulcrum at which the tipping point (and not a normalized backlash) leans in its favor.
Finding that point, and staying afloat regardless of growing pains, is imperative. So is selective indexation: allowing only the content with the appropriate signal into the index reduces the potential for keyword cannibalization (keyword overkill) and maintains a higher degree of diversity and relevance to appease search engine spiders and humans alike.
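As a closing illustration, here is a minimal sketch of a selective-indexation sanity check: map each indexable URL to the primary keyword it targets and flag any keyword claimed by more than one of them. The URL-to-keyword mapping and the field names are hypothetical; how you then keep the extras out of the index (noindex, canonicals, consolidation) is up to you.

```python
from collections import defaultdict

# Hypothetical inventory of pages, their target keywords, and indexation status.
PAGES = [
    {"url": "/blue-widgets/", "keyword": "blue widgets", "indexable": True},
    {"url": "/blog/blue-widgets-guide/", "keyword": "blue widgets", "indexable": True},
    {"url": "/blog/widget-news/", "keyword": "widget news", "indexable": True},
    {"url": "/tag/blue-widgets/", "keyword": "blue widgets", "indexable": False},  # already noindexed
]

def cannibalization_report(pages):
    """Flag any keyword targeted by more than one indexable URL."""
    by_keyword = defaultdict(list)
    for page in pages:
        if page["indexable"]:
            by_keyword[page["keyword"]].append(page["url"])
    return {kw: urls for kw, urls in by_keyword.items() if len(urls) > 1}

if __name__ == "__main__":
    for keyword, urls in cannibalization_report(PAGES).items():
        print(f"'{keyword}' is targeted by {len(urls)} indexable pages: {', '.join(urls)}")
```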