Duplicate Content and SEO

In SEO, you often hear about duplicate content and duplicate content issues. These issues are real, and unless corrected, they can quietly dilute a large percentage of your website's potential ranking strength.

How To Avoid Duplicate Content

What are some of the types of duplicate content? And what can be done to manage, prevent or minimize their impact on rankings or site performance?

  • Canonical issues: http:// and http://www variations of the same pages.
  • Shopping carts using hand-me-down descriptions from manufacturers.
  • Pages that use the same template with very little to distinguish them from each other.
  • Block-level shingles (content copied from page to page, or from other websites).

As you can see from the brief list above, there are multiple species of duplicate content, and each should be eliminated or managed at the level of your content management system, your .htaccess file, or the uniqueness of the content on each page.

Today we are only addressing the type created within your own website (pages that get indexed despite having too little unique content) and ways to avoid or eliminate it.

We often emphasize the importance of uniqueness on a page-by-page level within a website. Having hundreds of pages where each differs by only 12% (a different title, model number, photo and brief description) doesn't qualify as significant distinction.

Common Duplicate Content Solutions

There are a few different things you can do to eliminate duplicate content, resolve canonical issues, or control which pages are indexed and which are not.

1) Write enough content to shift the collective focus of the page.

2) Personalize data feeds or commonly used manufacturer databases of makes and model numbers for ecommerce products.

3) Add a custom field or segment to your ecommerce pages to give each page its own digital footprint.

4) Consider using meta robots tags or robots.txt to block or prevent indexing of extremely similar pages (which alleviates the penalty).

5) Use a canonical tag so that pages linked like a daisy chain (or breadcrumbs) all point back to the leader as the primary index page. This is very useful for deep categories where getting visitors to the main category page is sufficient from an SEO perspective.

6) Personalize footer links to avoid diffusing each page's topic sitewide by making every page look similar.

From the perspective of search engines, realize that if a page has nothing new to contribute, and your website already has 20 other pages with the same look, feel, code and context, you can't expect them to waste space in their primary index on cookie-cutter pages.

To briefly address the points above:

For page-to-page canonical issues, use a 301 redirect by adjusting your .htaccess file (provided you are running an Apache server). More information on 301 redirects and how to implement them is available from Taming the Beast.
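As a minimal sketch (the paths and domain below are hypothetical placeholders), a single-page 301 redirect in .htaccess looks like this on Apache:

```apache
# Permanently redirect an old page to its canonical replacement.
# Requires Apache with mod_alias enabled; substitute your own URLs.
Redirect 301 /old-page.html http://www.yoursite.com/new-page/
```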

Once fixed, depending on the type of site-level duplication, your link flow will consolidate within the site, and you can then use internal linking to shore up segments of your website that may be depleted or starved for link flow.

Also, if you have multiple versions of your homepage, make sure to consolidate them using the page-level redirect method to one specific version: http://www.yoursite.com/ rather than http://www.yoursite.com/index.html or http://www.yoursite.com/default.htm (or the .aspx, .shtml, .cfm equivalents).
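A sketch of that homepage consolidation in .htaccess, assuming Apache with mod_rewrite and using yoursite.com purely as a placeholder domain:

```apache
RewriteEngine On

# Send the non-www hostname to the www version with a 301.
RewriteCond %{HTTP_HOST} ^yoursite\.com$ [NC]
RewriteRule ^(.*)$ http://www.yoursite.com/$1 [R=301,L]

# Collapse direct requests for /index.html (or .htm, .php) onto the root URL.
RewriteCond %{THE_REQUEST} ^[A-Z]+\ /index\.(html?|php)\ HTTP [NC]
RewriteRule ^index\.(html?|php)$ http://www.yoursite.com/ [R=301,L]
```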

Also, when linking to those pages (both within your own navigation and from other sites), make sure you use one consistent page / naming convention / URL.

Solutions for Multiple Pages with Similar Content

There are a few ways to remedy this. You can use a canonical tag to refer back to the original page or manage what gets indexed to avoid duplication.

If you have access to customize the headers of a page, you can implement a simple meta robots tag.

NOINDEX, FOLLOW – invites spiders / user agents to crawl your page but prevents that page from being indexed. Search engines will still follow the links, count them as a ranking factor, and discover new pages from the links present on the page.

This would be the preferred setting for ecommerce pages if you want to emphasize the link back to the main category page and apply your SEO efforts to make the category your preferred landing page for rankings and traffic from natural search.
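In markup, that setting is a single tag placed in the page head of each near-duplicate page:

```html
<!-- Crawl and follow the links (so the category page still receives
     link flow), but keep this page itself out of the index. -->
<meta name="robots" content="noindex, follow" />
```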

INDEX, NOFOLLOW – tells search engine spiders / user agents to index the page, but not to follow the links on the page to other pages or count them in their algorithms.

NOINDEX, NOFOLLOW – informs them that you gave at the office and would rather not be bothered. In other words, the page will not be included in the search engine index, and its links are neither followed nor treated with any significance.

The most common meta tag setting is INDEX, FOLLOW, which is like leaving a plate of cookies out for Santa: it grants full permission and welcomes search engines in to crawl, index, sort and earmark pages for information retrieval. The same can be done at the server level with a robots.txt file. For more information on robots.txt, follow the link.
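For the server-level route, a minimal robots.txt (served from the site root) might look like the sketch below; the blocked paths are hypothetical, so substitute whichever site segments actually generate near-duplicate pages:

```text
# http://www.yoursite.com/robots.txt
User-agent: *
Disallow: /print/
Disallow: /search/
```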

If you have dynamically loading pages that insert tracking cookies, session IDs, or other unique strings into your URLs, then the canonical tag is the best way to ensure those pages are not creating duplication across your entire content management system.

A canonical tag is a tag that the three major search engines (Google, Yahoo and Bing) recently started to support. It eliminates clutter in their indexes by preventing multiple URL variations with slight differences from appearing as more than one page.

By adding the canonical tag, you ensure that, much like Highlander, there can be only one, and the main page gets the credit, rankings and authority. For more information on canonical tags and how to implement them, follow the link.
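As a sketch, a session-ID variant such as http://www.yoursite.com/widget?sessionid=123 (a hypothetical URL) would declare the clean version of itself in its head section:

```html
<!-- All parameter-laden variants point at one clean URL, so the
     engines consolidate them into a single indexed page. -->
<link rel="canonical" href="http://www.yoursite.com/widget" />
```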

Other Methods to Eliminate Duplicate Content

Add unique content to pages that require more emphasis. This includes rewriting the descriptions of your top-selling products if you use a shopping cart, to avoid duplicate shingles shared with other sites selling the same products.

At first you think: who has the time to rewrite hundreds or thousands of product descriptions? The answer: the companies that know the value of unique content and authority, and what it means to rank head and shoulders above the competition with a fraction of the backlinks or SEO effort otherwise required to propel them there.

Every word you add to a document changes the landscape of that page. The more unique it is, and the more frequently it references the primary keywords the page targets, the easier it is to separate that page from other pages built on the same template.

Navigation is another area of concern for nested pages: numerous me-too links can convolute a page, make it look like every other page, and completely diffuse its uniqueness.

You can tuck navigation behind an IFRAME, use nofollow, or route navigation links through a /CGI-BIN/ script where all of the redirects happen on a page not followed by robots or blocked by robots.txt.
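The nofollow variant is the simplest to illustrate; in this sketch the link target is a hypothetical category URL:

```html
<!-- A repeated boilerplate navigation link marked nofollow, so it
     carries no editorial weight from this page. -->
<a href="/category/widgets/" rel="nofollow">Widgets</a>
```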

However, whether they are followed or not, the presence of all of those links creates a page impression that, for lack of a better term, skews your SKUs (stock keeping units) by slightly obscuring their focus.

The advantage of the CGI-BIN method is that you can then insert includes or use contextual links (links in the body copy of the document) to connect relevant pages with keyword-rich anchor text, increasing relevance and ranking position.

Footer links or breadcrumbs would then carry more weight in the algorithm, since every link is not wide open on your pages. Although this falls into the category of page sculpting, it is still important to understand the premise behind the topic.

Another Advanced SEO Tactic

1) Create a custom text field somewhere on the page through the content management system.

2) Populate that content through an XML feed, or pull data from a blog. The blog can be set to noindex, follow and link to the new destination landing page, which then showcases the feed data as its own.

3) Or, for a low-tech solution, simply write 250-400 words on the topic and insert that into the custom text field.
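Step 2 above can be sketched in Python using only the standard library: a hypothetical RSS fragment (standing in for your blog's feed) is parsed and turned into an HTML block that a CMS custom field could display as the page's own content.

```python
import xml.etree.ElementTree as ET

def feed_to_html(rss_xml):
    """Turn RSS <item> titles and descriptions into an HTML fragment
    suitable for a CMS custom text field."""
    root = ET.fromstring(rss_xml)
    parts = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        desc = item.findtext("description", default="")
        parts.append("<h3>%s</h3>\n<p>%s</p>" % (title, desc))
    return "\n".join(parts)

# Hypothetical feed data standing in for your blog's RSS output.
sample = """<rss version="2.0"><channel>
  <item><title>Widget care</title><description>How to clean a widget.</description></item>
</channel></rss>"""

print(feed_to_html(sample))
```

In practice you would fetch the feed over HTTP on a schedule and write the fragment into the custom field through your CMS's API.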

As you can see, where there is a will, there is a way. The blog method mentioned above lets you easily use a CMS, keep the content from showing up in search engines on the blog itself (by setting it to archive or noindex), and then use the aggregated data you created to rank the page of your choice.

You could also tag the content and aggregate it through additional channels like Google Base and other RSS-based feeds for additional leverage.

The real takeaway here is this: if you want to distinguish your website in search engines, be ready to take the extra step and implement solutions that support granular changes to URL naming conventions, on-page content, link structure and CMS architecture. All of these changes are secondary to thoughtful planning and can be avoided entirely by building the site properly in the first place.

However, since that is not always an option, producing optimal rankings often means creating custom programs, hacks and workarounds to adapt, modify and optimize your CMS: adding content on the fly, changing or modifying header information, rewriting titles and more.

In the event that a redesign is not an option, there are plugins that can help distinguish your money pages using some of the tactics we discussed above.

Also, did I mention it is free? If you use the WordPress platform, we invite you to download our SEO Ultimate plugin, which can rewrite titles and meta data, manage tag and archive pages, adjust headers, track 404 errors, alter the robots.txt or .htaccess file, toggle which modules are active, and provide numerous other SEO features.

As stated above, it's better to build it right the first time; but just in case you can't, there are always solutions for common SEO problems.
