[ratelimits] CH/TXT/id.server queries rate-limited

Vernon Schryver vjs at rhyolite.com
Fri Oct 26 15:02:57 UTC 2012

> From: Jay Daley <jay at nzrs.net.nz>

> > However, there is no practical number or kind of rate limiting
> > buckets that solves that problem without what I think are intolerable
> > false positives.

> If you think that rate limiting with a granularity of one hour is
> likely to generate an unacceptable number of false positives, why do
> you think a granularity of one second will not cause a similar problem?

I was not thinking about increased false positives due only to a window
3600 times longer than the 1 second window in the BIND9 RRL patch.
Instead I was thinking about popularly proposed schemes using bloom
filters or hash tables without keys that take much less memory but
that have significant false positive rates by sometimes putting two
or more distinct responses in a single bucket.

The effective BIND9 RRL window is 1 second for streams of responses
that do not exceed the limit, which is the vast majority of streams,
and which--again--are not really streams but unique (IP,qname,qtype)
responses.
It is only when the token balance goes negative that the effective
window increases toward the configured window.  This is a big help
for the size of the database.  It need only be big enough for X+Y
entries, for X=peak per second rate of distinct, conforming responses
and Y=peak per minute rate of non-conforming responses.
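The mechanism described above can be sketched roughly as follows.  This
is a simplified model, not the actual BIND9 RRL patch; the rate and
window values are hypothetical:

```python
import time

RATE = 25     # hypothetical responses-per-second limit
WINDOW = 15   # hypothetical configured window, in seconds

class Bucket:
    """One bucket per distinct (IP,qname,qtype) response stream."""

    def __init__(self, now):
        self.balance = RATE   # start with one second's worth of tokens
        self.last = now

    def allow(self, now):
        # Refill for the elapsed time, clamped to one second's quota, so
        # a conforming bucket is back to full after ~1 second and can be
        # recycled; only offenders carry a negative balance, which grows
        # the effective window toward the configured WINDOW.
        self.balance = min(RATE, self.balance + RATE * (now - self.last))
        self.last = now
        if self.balance > 0:
            self.balance -= 1
            return True       # send the response
        # Penalty is capped at WINDOW seconds' worth of tokens.
        self.balance = max(self.balance - 1, -WINDOW * RATE)
        return False          # drop (or truncate) the response

buckets = {}

def limit(client_ip, qname, qtype, now=None):
    now = time.monotonic() if now is None else now
    key = (client_ip, qname, qtype)
    return buckets.setdefault(key, Bucket(now)).allow(now)
```

With these numbers, the 26th identical response in the same second is
refused, but a bucket that only briefly went negative is fully refilled
one second later.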

> However we still have the problem that someone can use a server
> at the maximum rate limit allowed for an indefinite period (once
> they've worked out what that maximum rate is as per my message to
> Paul).  The only way I can see to stop this is a second token bucket.
> Currently the minimum lifetime of a bucket is the window length,
> so why not add a second bucket for that window length.

> rate-limit {
>     responses-per-second 25;
>     window 5;
>     responses-per-window 75;
> };

> I agree that memory etc becomes a problem but all the above is doing
> is adding a new counter to the rate limiting record, not changing the
> number of records, or accesses to the records.

On the contrary, that would increase the number of records by a factor
of 5 (1 second versus 5 seconds).  Unlike rate limiting IP packets
and HTTP client IP addresses, at timescales shorter than DNS TTLs,
DNS (IP,qname,qtype) streams are almost all single, unique
request/response pairs.  They are not streams.  A DNS response rate
limiter needs about as many buckets as window*qps.  In my code,
practically all buckets are discarded or recycled after 1 second.  Your
code would need to keep all buckets for 5 seconds before discarding
almost all of them with final counts of 1.
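The factor-of-5 claim follows directly from the buckets ~ window*qps
sizing rule; a throwaway calculation with a purely hypothetical query
rate makes it concrete:

```python
# Hypothetical traffic figure, purely to illustrate the sizing rule;
# the real number depends on the server's query load.
qps = 100_000                  # peak distinct conforming responses/second

buckets_rrl = qps * 1          # ~1 second minimum bucket lifetime in RRL
buckets_proposed = qps * 5     # ~5 second lifetime with responses-per-window

print(buckets_rrl, buckets_proposed)   # 100000 500000
```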

Besides, you would get about the same effect on bad guys trying to
skirt the limit by simply reducing the limit, as in:

    rate-limit {
        responses-per-second 5;
        window 5;
    };

A third and even bigger issue is that you would be only partly solving
a problem that does not yet exist and that I doubt will ever exist.
You would be only partly solving it because the bad guy need only add
more open resolvers to get below your longer window limit.  I doubt
the problem will ever exist because of the various costs and benefits
to bad guys of various other tactics.

> When I've had rate limiting built before it used memcached and I suspect
> memcached on the same server will provide suitably low latency and
> so all that matters is enough RAM.

I understand memcached to be a distributed cache for databases.
That's a good idea for distributing and speeding up databases, but it
would be a disaster for DNS response rate limiting.  DNS RRL requires
latencies of tiny numbers of microseconds, because the total work for
a DNS transaction is a small number of microseconds.  Speeding up a
distributed database with typical delays of hundreds to several thousand
microseconds is an entirely different problem.  No competitive
authoritative DNS server implementation delays responses for disk
accesses or network round trips.

That said, there is an aspect of DNS response rate limiting that
could use some network chitchat for something of a distributed
database.  Large authoritative DNS servers use lots of computers.
It would be good if they could share blacklists.

Vernon Schryver    vjs at rhyolite.com
