This guide is for those who would like to automate most of the process of finding expired domains. The advantage of using a method like this is that most drop lists and expired-domain auctions will never pick these domains up. Essentially, they slipped through the cracks and nobody noticed.
To briefly sum up the steps taken in this guide:
- Scrape external links from high PR websites
- Filter and save domains that are available from those links
- Analyze those domains using a hands-on approach
- Buy the domains that pass our requirements
I just want to say that I have never written a guide like this before, so if you find any glaring mistakes please contact me.
Before we begin
First off, the only tool that's strictly required for this is Scrapebox. But if you go with Scrapebox alone, you'll spend a ridiculous amount of time waiting and you won't be able to properly analyze the domains later on in this guide.
Anyway, here are the tools I used, all of which I already had. If you don't have some of them, you should be able to replace them with an alternative of your choice.
- Scrapebox Link Extractor
- Scrapebox Alive Check
- Scrapebox’s Premium Automator Plugin
- Custom TLD Reformatter
- Squid Proxies
- Proxy Multiply
- Mozscape API
- Open Site Explorer
You will have to download my Expired Domains zip folder which includes the folder and file structure used for this guide.
Here is the download link Expired Domain Zip.
Note that it includes a program called URL Shortner. This program is executed by the automator; what it does is trim URLs down to their proper root domains. I use it because Scrapebox's trim-to-root doesn't always understand the structure of a domain properly. Without it you will most likely end up with a ridiculous number of broken domain links that you will have to manually fix or remove before pasting into a bulk domain checker, as they will error out if not formatted correctly.
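I don't know exactly what URL Shortner does internally, but if you want a rough idea of (or a fallback for) the trim-to-root step, here is a minimal Python sketch under the assumption that you just want the hostname out of each URL (urls.txt and roots.txt are made-up file names, not part of the zip):

```python
# Minimal sketch: reduce a list of URLs to unique hostnames.
# Note: this keeps the full hostname; collapsing sub.example.co.uk down to
# example.co.uk properly needs a public-suffix aware library such as tldextract.
from urllib.parse import urlparse

def trim_to_root(url: str) -> str:
    if "://" not in url:
        url = "http://" + url  # urlparse needs a scheme to find the host
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

with open("urls.txt") as f:
    roots = {trim_to_root(line.strip()) for line in f if line.strip()}

with open("roots.txt", "w") as f:
    f.write("\n".join(sorted(roots)))
```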
My configuration for this guide revolves around the file structure used in the file you just downloaded. Ensure that you unzip it in the root of your C drive. If you plan on using a different file structure you will have to reconfigure the automator plugin with your new paths. For the rest of this guide I am going to assume that you have used the same file structure as me.
I used two different types of proxies in this guide both private and public.
Squid Proxies (Private)
Squid Proxies was used for scraping the URLs from Google, Bing, & Yahoo. I did this with only 10 dedicated proxies and it worked fine. Edit the file below with the private proxies of your choice.
- C:\SEO\Squid Proxies\squid.txt
Proxy Multiply (Public)
For public proxies I used Proxy Multiply. Why? Because we don't want to burn our private proxies, and it's much easier to use a large list of disposable proxies. Edit the file below with the public proxies of your choice.
- C:\SEO\Proxy Multiply\proxies.txt
Now let's open up Scrapebox. Ensure that you have set your Pagerank connections to 10 or higher or you'll be waiting a while for the automator to complete (though if you don't have many proxies you'll have to lower it). Follow the steps below.
- Select Settings Tab at the top
- Select Adjust Maximum Connections
- Followed by changing the Pagerank connection limit
Ensure that you have the Link Extractor and Alive Check addons installed or the next step won't work.
This is the heart of the entire process, Scrapebox’s Premium Automator Plugin. First we are going to edit the keyword file which can be found here:
- C:\SEO\Expired Domains\keywords.txt
In this example I used only 2 keywords to keep it quick. Remember that the automator setup scrapes a ton of external links from sites it picks up.
Now let's just take a quick look at the automator configuration so you get an idea of what it actually looks like and what steps it performs. There isn't much explaining required in this process, though I am sure I'll get messages regarding the footprint I used. The -2013 -2012 -2011 operators exclude pages that mention any of those years, so we only scrape websites that haven't been updated recently, as we are looking for older websites. You could easily extend that back to 2008 or 2007 if you wish; I have just had more success using more recent years.
Also, you can load Scrapebox empty or full if you wish; the automator takes care of all of the fields by clearing and importing the required information.
Now we’ll run the automator plugin by selecting:
- The Automator Tab
- Followed by the Run Automator File
- Browse to C:\SEO\Expired Domains
- Select ExpiredDomains.sbj to run
Now this might take anywhere from 10 minutes to 10 hours depending on the number of links that need to be both harvested and checked against page rank. If you want, increase the page rank/harvester connections used. Also remember that the more keywords you use, the more links you'll have to go through.
By now the automator has finished running, though we have a few more things we need to do to shorten the list of URLs.
The final file exported from the automator is C:\SEO\Expired Domains\ScrapeBox\final.txt
Unfortunately the automator's Alive Check functionality doesn't work properly, so we'll have to do this part by hand (a rough scripted stand-in is also sketched after these steps). Load the Alive Check addon by following these steps.
- Select the Addons tab
- Select Scrapebox Alive Check
- Select Load urls
- C:\SEO\Expired Domains\ScrapeBox\final.txt
- Options (Success Codes used to reduce live websites found)
- Now Start
Once it is done running complete the following:
- Select Save / Transfer
- Select Save Dead to your Harddrive
- C:\SEO\Expired Domains\complete.txt
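If the addon ever gives you trouble, here is a crude Python stand-in that does roughly the same job: request each URL and keep the ones that error out or return a failure status. It is single-threaded and much slower than the addon, and treating 4xx/5xx responses and connection errors as "dead" is my own assumption about how you'd want to filter:

```python
# Crude alive check: keep only URLs that fail to respond or return an error status.
import requests

with open(r"C:\SEO\Expired Domains\ScrapeBox\final.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

dead = []
for url in urls:
    try:
        # Some servers reject HEAD; switch to requests.get if that becomes a problem.
        resp = requests.head(url, timeout=10, allow_redirects=True)
        if resp.status_code >= 400:
            dead.append(url)
    except requests.RequestException:  # DNS failure, timeout, connection refused, etc.
        dead.append(url)

with open(r"C:\SEO\Expired Domains\complete.txt", "w") as f:
    f.write("\n".join(dead))
```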
We are now done with Scrapebox, let's move on!
You can use any text editor or even just Excel, but we need to remove the http:// from the complete.txt file you just exported, as most bulk domain checkers will treat the prefix as part of the domain name, which will not be the domain you are checking availability for (e.g., http://bizatomic.org becomes httpbizatomic.org).
Press Ctrl+F to bring up the find utility, select the Replace tab, fill in the Find what text area with http://, leave the second text area empty, and press Replace All.
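If you would rather script this step too, the same cleanup takes a few lines of Python (same complete.txt path as above):

```python
# Strip the http:// and https:// prefixes from every line of complete.txt.
path = r"C:\SEO\Expired Domains\complete.txt"
with open(path) as f:
    domains = [line.strip().replace("https://", "").replace("http://", "") for line in f]
with open(path, "w") as f:
    f.write("\n".join(domains))
```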
You will end up with a list similar to this.
Let's move on to checking the availability of the domains we now have.
Bulk Domain Availability Checker
We now need to see which domains are available for registration before we analyze them. This obviously shortens the list of domains you have to go through. I also want to point out that the automator does try to lower the number of already-registered domains, but a good amount of them still get through.
Here you can load up to 500 domains at a time to check for availability through GoDaddy's Bulk Domain Search. Paste in the domains you have just saved from the complete.txt file, followed by pressing GO.
You can see here that out of the 360 domains 151 of them are available for registration. Now save all of the available domains into a text editor of your choice as we’ll need them next.
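GoDaddy's checker is what I would trust for the final word, but if your list is long you can pre-filter it with a rough WHOIS heuristic. This is my own addition, not part of the original workflow; it uses the third-party python-whois package (pip install python-whois), and because WHOIS output varies a lot by registry you should still confirm availability before buying:

```python
# Heuristic availability pre-check. Treats "no WHOIS record found" as probably available.
import whois  # third-party package: pip install python-whois

def probably_available(domain: str) -> bool:
    try:
        record = whois.whois(domain)
        return not record.domain_name  # no registration data came back
    except Exception:                  # many registries simply error out on unregistered names
        return True

with open(r"C:\SEO\Expired Domains\complete.txt") as f:
    domains = [line.strip() for line in f if line.strip()]

for d in domains:
    if probably_available(d):
        print(d)
```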
SEOGadget (Excel Addon)
First download the SEOGadget addon, as we'll need it to compare various metrics against the domains we have saved (you will need Excel to run this). The purpose of this tool is to mass check domain metrics, since doing it through the website is limited to 5 domains per request. With this method we can do up to 200 in a single request.
Follow the installation instructions given for SEOGadget before proceeding.
Now inside the zip file you downloaded from me you will find an Excel file called MOZ API.xlsx. Open it up.
This file contains functionality from SEOGadget which uses the Mozscape API (if you don't have a pro account with Moz, I believe you are limited to a very low number of API requests per minute).
Now paste up to 200 domains into the A3:A203 fields to be analyzed. Double-click OK (the A1 field) to have the tool start its analysis. This may take 30 seconds or so.
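For reference, the addon is essentially calling the Mozscape URL Metrics endpoint for you. If you ever want to hit it directly, a hedged sketch is below; the endpoint and basic-auth style are from my memory of the legacy v1 API, and the Cols value is a placeholder you would need to look up in Moz's docs, so treat this as an outline rather than gospel:

```python
# Sketch of querying the Mozscape URL Metrics endpoint directly (legacy v1 API).
import requests

ACCESS_ID = "your-access-id"   # your Moz API credentials
SECRET_KEY = "your-secret-key"
COLS = 0                       # replace with the bit-flag sum for the metrics you want (see Mozscape docs)

def url_metrics(domain: str) -> dict:
    resp = requests.get(
        f"https://lsapi.seomoz.com/linkscape/url-metrics/{domain}",
        params={"Cols": COLS},
        auth=(ACCESS_ID, SECRET_KEY),
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

print(url_metrics("example.com"))
```

On a free Moz account keep the rate limit mentioned above in mind and space your requests out accordingly.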
Now sort and filter all of the columns to your liking. In this case I decided to sort by Domain Authority as it is a fairly good metric. You can do that by following these steps:
- Scroll to the far right of the table and select the pda drop-down menu.
- Select Sort Largest to Smallest
Now we have some domains we can analyze by hand, let's move on!
Ahrefs/Open Site Explorer/Archive Analysis
Finally the fun part, hands-on analysis! You may or may not have a Moz or Ahrefs subscription, I don't know. If you don't have access to either, this is going to limit the number of sites you can analyze properly.
So what do we look for when buying expired domains? The first thing I do is ask myself…
- Has it been spammed to death?
- Does it look like another SEOer previously owned it?
- Was it a PPP (Pills, porn and poker) type site?
To continue our example, I picked a site simply based on its name from the Excel sheet. So we'll take a look at the website yorkyouth.ca.
Starting with Ahrefs, I’ll review the following for the website I selected:
- Backlinks going to the top pages
- Less than 50% anchor cloud.
- Unnatural links
- Foreign links
- Blatant spam
As you can see below, it doesn't have a ton of backlinks, so let's dig a bit deeper.
Taking a look at the anchor cloud, it looks as if this website was indeed a legit website at one point in time.
Also, after checking out some of the top backlinks coming into this domain, you can see that it has some high-authority websites pointing back to it, including Charity Village, Toronto Public Library and New Market (be sure to check that these backlinks do in fact exist; there is a good chance the ones more than 200 days old don't anymore).
So from Ahrefs alone this website looks like a nice pickup, but let's take a look at Open Site Explorer as well.
Open Site Explorer
With Open Site Explorer I’ll just take a quick glance at:
- The backlinks, as they may be a bit different from Ahrefs.
- The Compare Link Metrics stats.
Here are a few of the backlinks, they are different from Ahrefs so I would also check a few of them out just to be sure they aren’t spam.
If you select the Compare Link Metrics at the top of the screen you can get a quick look at the MozRank and MozTrust. In this case both of them look fine to me.
Archive (Internet Archive Wayback Machine)
The last thing I do is look at the Wayback Machine to see how the website used to look. I will normally view the recent caches as well as a few of the older ones. When you go through these caches and see something that looks like spam, it probably is.
Make your way over to the Wayback Machine and just paste in your domain followed by pressing browse history. I selected the most recent one which was August 28, 2012.
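If your shortlist is long, you can also grab the closest snapshot for each domain programmatically through the Wayback Machine's availability API before opening them one by one. This is just a convenience sketch of mine, not part of the original workflow (the timestamp parameter asks for the snapshot nearest that date):

```python
# Fetch the archived snapshot closest to a given date via the Wayback availability API.
import requests

def closest_snapshot(domain: str, timestamp: str = "20120828") -> str | None:
    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": domain, "timestamp": timestamp},
        timeout=30,
    )
    snap = resp.json().get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap else None

print(closest_snapshot("yorkyouth.ca"))
```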
From the image below, I think it is safe to say this website was never owned by a spammer. It looks as if they decided to change their domain name, hence the new URL in the Wayback Machine bar. If I were looking for a local website, in this case with a Canadian extension, I would pick up this site.
Buying the domain
One important point about buying domains that I haven't yet covered is country code TLDs like .ca or .au. In order to pick up these domains you need to register with valid information related to the country code (though I am sure you can find ways around this if you really want to pick up that domain).
So you now have some domains that you feel comfortable buying? Great. You can buy domains from anywhere, but I recommend GoDaddy or Namecheap as they have great discounts most of the time (try searching for coupons as well; I can normally knock 20-30% off the purchase total).
I hope this guide has helped you find some usable expired domains. I know that it is a pretty complicated process that could be done with both fewer and alternative tools.
Like I said, you can get away without almost every tool other than Scrapebox. These other tools just help save time and ensure you get a decent domain. If you have any questions or suggestions regarding this guide please post below!