WHOIS Rate Limits Are Killing Your App
March 10, 2026
You build a domain checker, run 50 lookups, and get blocked. The WHOIS server returns "Query rate limit exceeded" or just closes the connection. Your IP is banned for hours. Your users see errors.
This is not a bug in your code. WHOIS servers enforce aggressive rate limits — and they don't tell you what the limits are.
How WHOIS rate limiting works
WHOIS runs on TCP port 43 — a raw text protocol with no authentication. The server's only defense against abuse is IP-based rate limiting. It counts connections from your IP, and after a threshold, it blocks you.
The problem: registries don't publish their limits. You learn the threshold by getting blocked. Some registries tolerate a few dozen queries per minute. Others block after 2-3 requests. Some block for minutes, others for 24 hours. The only way to know is to test — and testing means getting blocked.
Some registries are more aggressive than others, but they all enforce limits — including Verisign (.com, .net).
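The protocol really is that bare. A minimal query is a TCP connection to port 43, the domain name followed by CRLF, then reading until the server closes the connection. A sketch using only the standard library (the default server shown is Verisign's public .com WHOIS host; error handling is omitted for brevity):

```python
import socket

WHOIS_PORT = 43

def build_query(domain):
    # A WHOIS query is just the name terminated by CRLF -- no auth, no headers
    return f"{domain}\r\n".encode("ascii")

def whois_query(domain, server="whois.verisign-grs.com", timeout=10):
    """Send one raw WHOIS query and return the server's full reply."""
    with socket.create_connection((server, WHOIS_PORT), timeout=timeout) as sock:
        sock.sendall(build_query(domain))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:  # the server closes the connection when it's done
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")
```

Every call opens a fresh connection from your IP, which is exactly what the server's rate limiter counts.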
Why it gets worse at scale
If you are building a domain monitoring tool, a registrar integration, or a bulk domain checker, several problems compound:
Shared IP penalties. If you run on a cloud provider (AWS, GCP, DigitalOcean), your IP is shared with other tenants. Their WHOIS queries count against the same limit. You might get blocked before making a single request.
No error standard. Some servers return a text message ("Query rate limit exceeded"). Some return an empty response. Some silently drop the connection. You need custom error detection per registry.
No retry signal. Unlike HTTP 429 with a Retry-After header, WHOIS has no standard way to tell you when the ban lifts. You're left guessing: retry in 1 minute? 1 hour? 24 hours?
Per-TLD limits. Each registry has its own server and its own limits. Looking up 100 .com domains might work. Looking up 100 domains across 50 TLDs means hitting 50 different servers with 50 different rate policies.
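With no error standard, detection in practice degrades into a per-registry heuristic: match known phrases and treat empty replies as suspicious. A sketch of that approach (the marker strings are illustrative examples seen in the wild, not an exhaustive list):

```python
# Rate-limit wording varies by registry; these markers are illustrative,
# not exhaustive -- real code accumulates them one outage at a time.
RATE_LIMIT_MARKERS = (
    "query rate limit exceeded",
    "exceeded the maximum allowable",
    "lookup quota exceeded",
)

def looks_rate_limited(response_text):
    """Heuristically decide whether a WHOIS reply is a rate-limit rejection."""
    if not response_text.strip():
        # Some servers return an empty body instead of an error message
        return True
    lowered = response_text.lower()
    return any(marker in lowered for marker in RATE_LIMIT_MARKERS)
```

Servers that silently drop the connection never reach this function at all; those surface as socket timeouts or resets, which need separate handling.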
The workarounds everyone tries
The workarounds are always the same:
Rotating proxies — distribute queries across multiple IPs. Works until the registry blocks entire IP ranges, which many do. It also adds cost, latency, and may violate the registry's terms of service.
DNS pre-check — query DNS first to skip unregistered domains. Reduces WHOIS calls, but a domain with DNS records might still be expiring, and a domain without records might still be registered. You still need WHOIS for anything definitive.
Local caching — store results in your database. Helps for repeated lookups, but not for new domains. And WHOIS data goes stale fast — expiration dates change, nameservers move, registrars transfer.
Sleep between queries — add 5-10 second delays. Checking 10,000 domains at 5 seconds each takes 14 hours. Your users are not going to wait.
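The DNS pre-check above can be sketched with the standard library: resolve first, and only fall through to WHOIS when DNS returns nothing. The `whois_lookup` parameter here is a hypothetical placeholder for whatever WHOIS client you use:

```python
import socket

def has_dns(domain):
    """Cheap pre-filter: does the domain resolve to anything at all?"""
    try:
        socket.getaddrinfo(domain, None)
        return True
    except socket.gaierror:
        return False

def check(domain, whois_lookup):
    if has_dns(domain):
        # Resolving almost certainly means registered -- but DNS tells you
        # nothing about expiry, registrar, or status.
        return "registered (per DNS; details require WHOIS)"
    # No DNS records proves nothing: parked and newly registered domains
    # often have none, so the expensive WHOIS call is still needed here.
    return whois_lookup(domain)
```

This cuts WHOIS volume for bulk availability checks, but as noted above it cannot replace WHOIS for anything definitive.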
These are all band-aids. They reduce the pain without solving the problem: WHOIS gives you no reliable way to know when you will be blocked, or for how long.
How RDAP handles rate limiting better
RDAP runs over HTTPS and returns JSON. It has rate limiting too. But there is a fundamental difference: RDAP tells you what happened.
When an RDAP server rate-limits you, it returns a standard HTTP 429 status code. Some servers also include a Retry-After header telling you exactly how long to wait — though not all do. Either way, your code can detect and handle it:
import requests
import time

def rdap_lookup(url, max_retries=3):
    for _ in range(max_retries):
        resp = requests.get(url, headers={"User-Agent": "MyApp/1.0"})
        if resp.status_code == 200:
            return resp.json()
        if resp.status_code == 429:
            # Honor Retry-After when the server sends it; default to 60s
            retry_after = int(resp.headers.get("Retry-After", 60))
            time.sleep(retry_after)
            continue
        resp.raise_for_status()
    raise RuntimeError("Max retries exceeded")
This works for a small number of lookups. At scale — thousands per day across hundreds of RDAP servers — you need persistent backoff state per server, request queues, cache layers, and monitoring. Each server has different limits, and those limits change over time.
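A single-function retry loop forgets everything between calls. Persistent backoff state means remembering, per server, when it last pushed back and how hard. A minimal in-memory sketch of that bookkeeping (a production system would persist this and share it across workers):

```python
import time

class ServerBackoff:
    """Tracks, per RDAP server, when it is safe to send the next request."""

    def __init__(self):
        self.next_allowed = {}  # hostname -> unix timestamp we may query again
        self.penalty = {}       # hostname -> current backoff window in seconds

    def ready(self, host, now=None):
        now = time.time() if now is None else now
        return now >= self.next_allowed.get(host, 0.0)

    def record_429(self, host, retry_after=None, now=None):
        now = time.time() if now is None else now
        # Prefer the server's Retry-After; otherwise double our own penalty
        wait = retry_after if retry_after is not None else self.penalty.get(host, 30) * 2
        self.penalty[host] = wait
        self.next_allowed[host] = now + wait

    def record_success(self, host):
        # A normal answer resets the penalty for that server
        self.penalty.pop(host, None)
        self.next_allowed.pop(host, None)
```

A request queue then consults `ready()` before dispatching, instead of sleeping inline and stalling every other lookup behind one slow server.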
ICANN has required RDAP support from all registries since 2024, so the number of RDAP servers — and the rate limiting complexity — is only growing.
At scale: let someone else handle it
This is the problem we built RDAP API to solve. We track rate limit behavior across hundreds of RDAP servers — learning each server's thresholds, backing off proactively, and caching aggressively. When a server starts returning 429s, we adapt automatically. You just get clean JSON:
curl -H "Authorization: Bearer YOUR_TOKEN" \
"https://rdapapi.io/api/v1/domain/shopify.com"
{
"name": "shopify.com",
"status": ["client delete prohibited", "client transfer prohibited", "client update prohibited"],
"registered": "2006-12-22T07:43:49Z",
"expires": "2033-12-22T07:43:49Z",
"registrar": "MarkMonitor Inc."
}
Or check up to 10 domains at once:
curl -H "Authorization: Bearer YOUR_TOKEN" \
-X POST "https://rdapapi.io/api/v1/bulk/domain" \
-d '{"domains": ["shopify.com", "stripe.com", "vercel.dev"]}'
You stop thinking about rate limits. We handle per-server backoff, caching, and retries on our end. Plans start at $9/month for 30,000 lookups — see the docs to get started.
Further reading
- The RDAP JSON Response Decoded — field-by-field walkthrough of RDAP responses
- Bulk Domain Lookup API — check 10 domains in one request
- WHOIS API Alternatives in 2026 — provider comparison