This project scrapes all available domains from freedns.afraid.org that can be used for free subdomain registration.
- Scrapes all ~25,000 public domains from the registry
- Handles pagination
- Extracts domain, status, owner, age, and hosts in use
- Outputs to markdown tables: alphabetical and length-sorted
- Automated via GitHub Actions every 12 hours
- Python 3.9+
- Dependencies listed in `requirements.txt`
- Clone or download this repository
- Install dependencies:
```bash
pip install -r requirements.txt
```
Run the scraper:

```bash
python scraper.py
```

Or scrape only the first n pages (each page contains roughly 100 domains):

```bash
python scraper.py -p 10
```

The script will:
- Scrape all pages of the domain registry
- Extract domain information from the HTML tables
- Save the results to `domains-alphabetical.md` and `domains-length.md`
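The extraction step pulls rows out of the registry's HTML tables. The real column layout on freedns.afraid.org (domain, status, owner, age, hosts) depends on the site's markup, so the following is only a minimal stdlib sketch of the table-parsing approach, run here against a hypothetical three-column row:

```python
from html.parser import HTMLParser

class DomainTableParser(HTMLParser):
    """Collects the text of each <td> cell, grouped by table row."""

    def __init__(self):
        super().__init__()
        self.rows = []       # one list of cell strings per <tr>
        self._row = None
        self._in_td = False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag == "td":
            self._in_td = True

    def handle_endtag(self, tag):
        if tag == "tr" and self._row:
            self.rows.append(self._row)
            self._row = None
        elif tag == "td":
            self._in_td = False

    def handle_data(self, data):
        if self._in_td and self._row is not None:
            self._row.append(data.strip())

# Hypothetical row shape -- the live site has more columns.
sample = "<table><tr><td>example.com</td><td>public</td><td>alice</td></tr></table>"
parser = DomainTableParser()
parser.feed(sample)
print(parser.rows)  # [['example.com', 'public', 'alice']]
```

A dedicated HTML library (e.g. BeautifulSoup) would make the real parsing more robust; the sketch just shows that each table row maps to one list of cell values.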
The project runs a GitHub Action automatically every 12 hours to update the list of domains.
To set it up:
- Push this code to a GitHub repository
- Ensure GitHub Actions is enabled
- The workflow will run on schedule and commit updates to the markdown files
You can also trigger manually via the Actions tab.
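The repository's actual workflow file may differ, but a scheduled workflow with a manual trigger generally looks along these lines (action versions and the commit step are assumptions, not taken from this repo):

```yaml
name: Update domains
on:
  schedule:
    - cron: "0 */12 * * *"   # every 12 hours
  workflow_dispatch:          # enables manual runs from the Actions tab
jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.9"
      - run: pip install -r requirements.txt
      - run: python scraper.py
      - run: |
          git config user.name "github-actions"
          git config user.email "github-actions@users.noreply.github.com"
          git add domains-*.md
          git commit -m "Update domain lists" || echo "No changes"
          git push
```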
- `domains-alphabetical.md`: Domains sorted alphabetically
- `domains-length.md`: Domains sorted by length (shortest first), then alphabetically (`ab.cd` before `ac.cd`)
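The length-then-alphabetical ordering can be expressed as a single sort with a tuple key (the domain names below are made up for illustration):

```python
domains = ["ac.cd", "ab.cd", "z.io", "a.io"]

# Shorter domains first; equal lengths fall back to alphabetical order.
length_sorted = sorted(domains, key=lambda d: (len(d), d))
print(length_sorted)  # ['a.io', 'z.io', 'ab.cd', 'ac.cd']
```

Python compares tuples element by element, so `len(d)` dominates and the name itself only breaks ties.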