Before you can truly lean into an SEO campaign, there are a few mandatory prerequisites for any website to scale the SERPs (search engine result pages).
The most important criterion initially is getting into the index, which happens when bots (a.k.a. spiders) crawl your pages. Once you are in the index, search engines store updates and changes to track your content over time and how it affects the top 1,000 pages cataloged for the search phrases aligned with (1) the document's focal point and on-page structure and (2) the extent of the page's off-page reach as a result of link reputation and relevance.
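Getting discovered in the first place is partly a plumbing problem, so it is worth automating. As a minimal sketch (not from the original post; the URLs and file name are placeholders), here is one way to generate a bare-bones XML sitemap in Python so spiders can find every page you want indexed:

```python
# A minimal sketch: generate a basic XML sitemap so crawlers can discover
# your pages. The URLs below are hypothetical placeholders.
import xml.etree.ElementTree as ET

PAGES = [
    "https://www.example.com/",
    "https://www.example.com/services/seo/",
    "https://www.example.com/blog/back-to-basics/",
]

def build_sitemap(urls):
    # Root element uses the standard sitemap protocol namespace.
    urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for page in urls:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = page
    return ET.ElementTree(urlset)

if __name__ == "__main__":
    build_sitemap(PAGES).write("sitemap.xml", encoding="utf-8", xml_declaration=True)
```

A sitemap does not guarantee indexing, but it removes one common discovery bottleneck before the rest of the ranking factors even come into play.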
I addressed in another post how Google has a repository of data (nicknamed the bin) gleaned from the moment a page enters their index through to its current embodiment. This allows them to assess changes over time and extract the gist of (a) how that page has evolved, (b) whether it has gained off-page reputation to validate the on-page claims, and (c) a blueprint of the most likely and least likely ranking factors for that page. You can see a public version of the same idea from another project at archive.org. Although theirs is captured on a less frequent schedule, it gives you a sense of how evolution can impact your website's ranking factors on a page-by-page basis.
To borrow the description of the bin from another SEO post of ours (yet another past reference to the bin)…
Google has a memory of every revision you have ever made to your site, so every page title and every description you have ever written for your pages is stored in a profile (known as the bin) for your site. When someone types a query, it checks the bin for an exact or partial match and returns the most appropriate response from any number of websites and the respective pages within those sites (you can see this in the descriptions of the search results, which change based on your query). So, in essence, the more you update your pages strategically, the more frequently Google returns and catalogs your content, and the more keywords you can amass for your pages.
Similarly, if improving the ranking factors for your page is the objective, then realize that “the spiders in turn compare recent changes with the last cached snapshot of your pages, take note of revisions, and integrate them into the bin (a storage cloud of your site's history spread across different data centers).
Depending on how important the concept and the continuity behind the document are, if the new changes are favored by search engine algorithms, then a rise in relevance and relevance score occurs for the target terms on that page, and for the entire site, as a result of this synergy”.
So, what this tells us is that (1) a page is never completely finished unless you have exceeded the number of criteria/weighting mechanisms needed to rank in the top results for a specific keyword, and (2) every update is recorded and affects the relevance score for that page (which is a moving target). If you can acquire relevance at a rate that exceeds your competitors', you secure your position in the SERPs for that phrase.
For example, if you are one of three people being chased by a bear in the woods, you don't necessarily have to outrun the bear, just the other two people, to ensure your safety. The same applies to relevance and relevance score: your biggest competitor is your own website, sculpting the necessary on-page and off-page factors, and how quickly those changes are spidered, indexed, and start impacting your rankings.
So, going back to basics means that things such as post frequency, relevant content built around a topical theme, and revision and refinement over time are key ingredients in securing a competitive keyword or key-phrase combination.
For example, in the old days of SEO, 4-5 years ago, people resorted to stuffing meta tags, alt attributes on images, footer links, and body text with all types of spammy keywords in order to increase the relevance of that page. This was before search engines evolved and got hip to these less-than-honorable tactics. Now, someone still using such dated techniques would find their pages unable to rank their way out of a paper bag.
Not only have search engine algorithms evolved, but consumers have become more sensitive to what they read, and quite frankly a spam-laden page full of stuffed keywords is not going to impress human readers (any more than it impresses the often-abused search engine spiders). Ensuring your website is free of spider traps and allows crawlers in to index your content is more important than ever.
Often we have a client who is ranking like a rockstar on one search engine and is not even indexed on the other top three search engines. Each search engine and source of referral traffic brings its own metrics to bear, as well as its own criteria for what it considers a primary ranking factor. Regardless of what those values are, they won't matter unless a search engine can crawl your pages and index them to begin with. So, before you put the cart before the horse, maybe it's time to go back to basics, look at the primary site navigation, and assess how nested your site architecture is and whether the spiders need a little help to ingest it.
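As a quick sanity check before reworking navigation, something like the sketch below (the domain, paths, and user-agent names are hypothetical examples, not from this post) can confirm that your robots.txt is not quietly locking spiders out of the pages you expect to rank:

```python
# A rough crawl-access check: verify that robots.txt allows the spiders you
# care about to fetch your key pages. Domain, paths, and agents are placeholders.
from urllib.robotparser import RobotFileParser

SITE = "https://www.example.com"
KEY_PAGES = ["/", "/services/seo/", "/blog/"]
SPIDERS = ["Googlebot", "Slurp", "msnbot"]  # era-appropriate crawler names

rp = RobotFileParser()
rp.set_url(SITE + "/robots.txt")
rp.read()  # fetches and parses the live robots.txt

for agent in SPIDERS:
    for path in KEY_PAGES:
        allowed = rp.can_fetch(agent, SITE + path)
        print(f"{agent:10} {path:20} {'allowed' if allowed else 'BLOCKED'}")
```

If anything important shows up as blocked, fix that before worrying about the finer ranking signals.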
You may need assistance if:
- You have a large site and do not link much from page to page with contextual anchor/link text.
- Your navigation only encompasses a small range of pages and leaves other pages starving for link weight.
- Your pages lack a proper internal link graph tying them together.
Internal links give you the ability to tell your side of the story. You are essentially helping search engines do their job by (a) linking to relevant pages with relevant anchor text and (b) delineating a clear structure for topical content (theming and siloing), or nesting content in relevant subfolders. From there, when the off-page link factors come into play, the blend of links to the homepage and deep links to subpages determines how effectively your pages fare against other websites targeting those terms.
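To see where your internal link graph is thin, a rough audit helps. The sketch below (the page URL is a placeholder, and it only inspects a single page) pulls every internal link and its anchor text so you can spot generic anchors and sections starved of contextual links:

```python
# A rough internal-link audit: list every internal link on a page along with
# its anchor text. The URL below is a hypothetical placeholder.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

PAGE = "https://www.example.com/blog/back-to-basics/"

class LinkCollector(HTMLParser):
    def __init__(self, base):
        super().__init__()
        self.base = base
        self.links = []            # (href, anchor text) pairs
        self._current_href = None
        self._current_text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self._current_href = urljoin(self.base, href)
                self._current_text = []

    def handle_data(self, data):
        if self._current_href:
            self._current_text.append(data.strip())

    def handle_endtag(self, tag):
        if tag == "a" and self._current_href:
            text = " ".join(t for t in self._current_text if t)
            self.links.append((self._current_href, text))
            self._current_href = None

html = urlopen(PAGE).read().decode("utf-8", errors="ignore")
collector = LinkCollector(PAGE)
collector.feed(html)

site = urlparse(PAGE).netloc
for href, text in collector.links:
    if urlparse(href).netloc == site:  # keep internal links only
        print(f"{text or '(no anchor text)'} -> {href}")
```

Run it across a handful of key pages and compare: pages that rarely appear as link targets, or that only ever get "click here" style anchors, are the ones leaking link weight.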

Enjoyed that post, thanks.
Do you have any empirical evidence of the “bin” concept causing ranking? For example, ranking for a keyword currently not contained on a page?
I don’t doubt the logic, just curious as to the extent it has been tested.
Great stuff.
Steve
Hey Steve:
Outside of my own experiments I cannot quote anything, as I am probably referring to this process with inappropriate descriptors. In any case, I can show you several pages I personally optimized (for specific terms) that no longer exist, yet the rankings hold fast.
Take the term SEO visibility for example.
Long ago I aggressively optimized this keyword (as a trophy keyword), and the term even has a mini one-box (a sample of sitewide links) for the relevance it still carries, yet I removed all references to that term from the page over a year ago.
I also have many other phrases that “rank on fumes” with legacy or “ghost rankings”, as I call them. I should, however, hold something back for the sake of competitive advantage (I give away a lot of SEO stuff on this blog for free).
Indeed you do, but that’s what makes your blog interesting and better than the majority of SEO hyperbole published. Keep it up, there’s a gap in the market for quality, insightful writing now that seotheory.com has stopped.
Thanks :)
Good, thanks for the information.
Yes, the internal link framework of a website counts a lot in how it ranks on search engines. So, careful research should be done when building a website's internal page-linking architecture.