Domain Expiration Monitoring — Automated Alerts with GitHub Actions

March 1, 2026

Your registrar sends renewal emails. That works until you have domains across multiple registrars, the billing contact leaves the company, or you need to watch domains you don't own — competitor domains, brand squatting targets, phishing lookalikes, client domains you manage but didn't register.

If you only have a few .com domains, you can query RDAP directly — no API needed:

curl -s https://rdap.verisign.com/com/v1/domain/google.com \
  | jq '.events[] | select(.eventAction=="expiration") | .eventDate'

That works, but each TLD has a different RDAP server, date formats vary, and some registries rate-limit aggressively. Once you're checking domains across multiple TLDs, the plumbing adds up. The RDAP API handles server discovery, normalization, and rate limits — one endpoint, consistent JSON, 1,200+ TLDs.

This guide uses the API to set up a weekly check with Slack alerts.

What you need

  • A GitHub repository (can be private, can be empty — just for running the workflow)
  • An RDAP API key — free 7-day trial, then $9/mo for 30,000 lookups
  • A Slack webhook URL (optional — GitHub emails you on failure by default)

The domain list

Add domains.txt to your repo — one domain per line:

# Our domains
mycompany.com
mycompany.io
myproduct.dev
mybrand.org

# Watch list
competitor.com
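
Blank lines and lines starting with # are ignored. The filtering amounts to this (a standalone sketch for illustration, not a file you need to commit):

```python
def parse_domains(text: str) -> list[str]:
    """Return domains from domains.txt-style text, skipping blanks and # comments."""
    domains = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            domains.append(line)
    return domains

print(parse_domains("# Our domains\nmycompany.com\n\ncompetitor.com\n"))
# → ['mycompany.com', 'competitor.com']
```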

The monitoring script

# monitor.py
import os
import sys
import json
from datetime import datetime, timezone
from urllib.error import HTTPError
from urllib.request import Request, urlopen

API_KEY = os.environ["RDAP_API_KEY"]
WARN_DAYS = int(os.environ.get("WARN_DAYS", "60"))

def check_domain(domain):
    req = Request(
        f"https://rdapapi.io/api/v1/domain/{domain}",
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    # context manager closes the connection promptly
    with urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())

now = datetime.now(timezone.utc)
alerts = []

with open("domains.txt") as f:
    domains = [
        line.strip()
        for line in f
        if line.strip() and not line.strip().startswith("#")
    ]

for domain in domains:
    try:
        data = check_domain(domain)
        expires = data.get("dates", {}).get("expires")
        if not expires:
            alerts.append(f"⚠️ {domain}: no expiration date available")
            continue

        # the trailing "Z" in RDAP timestamps parses on Python 3.11+
        exp_date = datetime.fromisoformat(expires)
        days_left = (exp_date - now).days

        if days_left < 0:
            alerts.append(f"🔴 {domain}: EXPIRED {abs(days_left)} days ago")
        elif days_left <= 30:
            alerts.append(f"🔴 {domain}: expires in {days_left} days — renew now!")
        elif days_left <= WARN_DAYS:
            alerts.append(f"🟡 {domain}: expires in {days_left} days")
    except HTTPError as e:
        alerts.append(f"⚠️ {domain}: API error {e.code}")
    except Exception as e:
        alerts.append(f"⚠️ {domain}: lookup failed ({e})")

if not alerts:
    print(f"All {len(domains)} domains OK (>{WARN_DAYS} days remaining)")
    sys.exit(0)

print(f"Domain Expiration Alert — {len(alerts)} domain(s) need attention:\n")
for alert in alerts:
    print(alert)

sys.exit(1)

The script checks each domain, compares the expiry date to today, and exits with code 1 if anything needs attention. No Slack logic, no notification plumbing — that's the workflow's job.

The GitHub Actions workflow

# .github/workflows/domain-monitor.yml
name: Domain Expiration Monitor

on:
  schedule:
    - cron: "0 9 * * 1" # every Monday at 9 AM UTC
  workflow_dispatch: # manual trigger for testing

jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6

      - uses: actions/setup-python@v6
        with:
          python-version: "3.12"

      - name: Check domain expirations
        run: python monitor.py
        env:
          RDAP_API_KEY: ${{ secrets.RDAP_API_KEY }}
          WARN_DAYS: "60"

      - name: Notify Slack on failure
        if: failure()
        uses: slackapi/slack-github-action@v2
        with:
          webhook: ${{ secrets.SLACK_WEBHOOK }}
          webhook-type: incoming-webhook
          payload: |
            {"text": "⚠️ Domain expiration alert — check the workflow run: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"}

Add two secrets in Settings > Secrets and variables > Actions:

  • RDAP_API_KEY — from rdapapi.io/dashboard
  • SLACK_WEBHOOK — your Slack incoming webhook URL (skip this if email is enough — GitHub already emails you when a workflow fails)

Push the three files, go to the Actions tab, and trigger the workflow manually to test. If everything is healthy:

All 5 domains OK (>60 days remaining)

If something needs attention, the workflow fails and you get a Slack ping:

Domain Expiration Alert — 2 domain(s) need attention:

🔴 mybrand.org: expires in 12 days — renew now!
🟡 competitor.com: expires in 45 days

One thing to know: GitHub automatically disables scheduled workflows on repos that have seen no activity for 60 days. If your domain list rarely changes, either push a commit periodically or move the workflow into a repo that already has regular activity.
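
If you'd rather automate the periodic commit, a small companion workflow can push an empty commit on a schedule. A sketch (the file name and cadence are arbitrary):

```yaml
# .github/workflows/keepalive.yml
name: Keepalive

on:
  schedule:
    - cron: "0 8 1 * *" # first of every month

permissions:
  contents: write # allow the job to push

jobs:
  keepalive:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
      - name: Push empty commit
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git commit --allow-empty -m "chore: keep scheduled workflows alive"
          git push
```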

Scaling to 100+ domains

For larger portfolios, the bulk endpoint checks 10 domains per request. Replace the single-domain loop with:

def bulk_check(batch):
    req = Request(
        "https://rdapapi.io/api/v1/domains/bulk",
        data=json.dumps({"domains": batch}).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())

for i in range(0, len(domains), 10):
    result = bulk_check(domains[i:i + 10])
    for r in result["results"]:
        if r["status"] == "success":
            expires = r["data"].get("dates", {}).get("expires")
            # ... same expiry check as before

100 domains = 10 API calls instead of 100. The single-domain script above works fine for 100 domains on a Starter plan (100 lookups/week = ~400/month out of 30,000) — bulk just reduces HTTP overhead. Bulk requires a Pro plan ($49/mo, 200k lookups).
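
The batching itself is ordinary list slicing. Pulled out as a helper (a sketch with hypothetical names):

```python
def batches(items, size=10):
    """Yield consecutive slices of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

domains = [f"domain{n}.com" for n in range(100)]
print(sum(1 for _ in batches(domains)))  # → 10
```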

Using Node.js instead

If your project is already Node-based and you'd rather not mix runtimes, here's the equivalent (no dependencies, uses built-in fetch):

// monitor.mjs — run with: node monitor.mjs
import { readFileSync } from "fs";

const API_KEY = process.env.RDAP_API_KEY;
const WARN_DAYS = parseInt(process.env.WARN_DAYS || "60", 10);
const domains = readFileSync("domains.txt", "utf-8").split("\n").map(l => l.trim()).filter(l => l && !l.startsWith("#"));

const now = new Date();
const alerts = [];

for (const domain of domains) {
  try {
    const resp = await fetch(`https://rdapapi.io/api/v1/domain/${domain}`, {
      headers: { Authorization: `Bearer ${API_KEY}` },
      signal: AbortSignal.timeout(30_000),
    });
    if (!resp.ok) { alerts.push(`⚠️ ${domain}: API error ${resp.status}`); continue; }

    const expires = (await resp.json())?.dates?.expires;
    if (!expires) { alerts.push(`⚠️ ${domain}: no expiration date`); continue; }

    const daysLeft = Math.floor((new Date(expires) - now) / 86400000);
    if (daysLeft < 0) alerts.push(`🔴 ${domain}: EXPIRED ${Math.abs(daysLeft)} days ago`);
    else if (daysLeft <= 30) alerts.push(`🔴 ${domain}: expires in ${daysLeft} days — renew now!`);
    else if (daysLeft <= WARN_DAYS) alerts.push(`🟡 ${domain}: expires in ${daysLeft} days`);
  } catch (e) { alerts.push(`⚠️ ${domain}: lookup failed (${e.message})`); }
}

if (!alerts.length) { console.log(`All ${domains.length} domains OK`); process.exit(0); }
alerts.forEach(a => console.log(a));
process.exit(1);

Swap the workflow step to run: node monitor.mjs and drop the setup-python step; GitHub runners have Node pre-installed.
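
Concretely, the swapped step looks like this (same env block as before):

```yaml
      - name: Check domain expirations
        run: node monitor.mjs
        env:
          RDAP_API_KEY: ${{ secrets.RDAP_API_KEY }}
          WARN_DAYS: "60"
```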
