In case you missed @mattcutts’ tweet last Friday, Google is turning to the interwebz for some help on the latest tweaks to its search algorithm. (Matt Cutts is the head of the webspam team at Google.) Specifically, they want to bring down the hammer on scraper sites, especially blog scrapers.
What’s a “scraper site”? It’s a site that copies others’ content and publishes it as its own. The problem is even more infuriating for content creators like us when those crappy sites with our stolen content rank higher in Google search results than ours. Grrrrr!
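If you want a rough way to check whether a suspect page is a near-copy of yours, one common technique is comparing word shingles (overlapping word n-grams) with Jaccard similarity. This is just an illustrative sketch, not any official Google tool; the function names and the texts below are made up for the example.

```python
# Sketch: detect near-duplicate (scraped) text via word-shingle overlap.
# All names here are illustrative, not part of any real detection service.

def shingles(text: str, k: int = 5) -> set:
    """Return the set of k-word shingles (overlapping word n-grams)."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |A & B| / |A | B|."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

original = ("Google is asking publishers to report scraper sites "
            "that outrank the original content")
copy = ("Google is asking publishers to report scraper sites "
        "that outrank the original content today")
unrelated = "An entirely different article about cooking pasta at home"

# A near-copy scores close to 1.0; unrelated text scores near 0.0.
print(jaccard(shingles(original), shingles(copy)))
print(jaccard(shingles(original), shingles(unrelated)))
```

Real scrapers often shuffle or lightly rewrite text, so production systems use fancier fingerprinting (e.g., MinHash), but the shingle idea is the core of it.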
This is where we come in. If you notice that your content is being scraped, fill out this Google Doc form with 1) the exact query that surfaced the problem (where the scraper site outranked the original site), 2) the URL of the site with the original content, and 3) the URL of the scraper site. The form says that they may use these examples to improve their algorithm.
Maintaining the quality of search results requires Google to stay on top of the tactics constantly being developed to game the system and win sites premium placement in search results.
You’ll hear SEO specialists refer to doorway pages, hidden text, cloaking, interlinking, keyword spamming/stuffing and scraping. These are referred to as “black-hat SEO tactics” and are the fastest way to get banned from Google.
I would love to say that as long as you walk the line and provide great content, you don’t have to worry about our friends at Google. However, the reality is that constant changes to the search formula can also be frustrating for legitimate web publishers who provide high-quality content. After every update, Google is bombarded with criticism and complaints from publishers who feel they were inadvertently penalized or “Google slapped.”
Yet another reason to diversify your sources of traffic (more on that subject to come).