APNIC-sponsored proposal could vastly improve DNS resilience against DDoS
The domain name system (DNS) is vital to the internet's operations, which makes it an obvious target. The architecture of DNS can even amplify the effect of some distributed denial of service (DDoS) attacks. But a new technique could help change that.
"We think this is a big thing," said Geoff Huston, chief scientist at the Asia Pacific Network Information Centre (APNIC).
"We think this is a valuable step towards changing the way we think about DNS and DNSSEC [the protocol to cryptographically sign and authenticate DNS information], and actually using it in a way that helps us all get over the rising torrent of the Internet of Tragically Stupid Things," Huston said in his keynote address to the 44th APNIC Conference in Taichung, Taiwan last Tuesday.
The technique, "Aggressive Use of DNSSEC-Validated Cache", was developed by Kazunori Fujiwara at Japan Registry Services, Akira Kato at Keio University in Yokohama, and Warren Kumari at Google. It is defined in RFC 8198.
Huston provided an approachable explanation in a blog post in February this year, and in the slide deck [PDF] accompanying his keynote.
The DNS architecture is a hierarchy. When a domain like example1.example.com is resolved to an IP address, or some other lookup is done on that domain, the request is passed up to one of the so-called root servers. That server provides the IP addresses for the servers that can resolve .com, which in turn provide the addresses for the servers that can resolve example.com, and those servers finally provide the addresses for example1.example.com.
To improve performance, the answers to requests are cached locally, whether they're successful, or whether they return NXDOMAIN for "non-existent domain".
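That lookup-and-cache behaviour can be sketched in a few lines of Python. This is a toy model, not real DNS: the zone data, server names, and the resolve function are invented for illustration, and a real resolver follows NS referrals and TTLs rather than a dictionary.

```python
CACHE = {}

# Hypothetical delegation tree: each server knows the answer for the
# next label down. Names and addresses are illustrative only.
ZONES = {
    ".": {"com": "ns.com"},                        # root refers to the .com servers
    "ns.com": {"example.com": "ns.example.com"},   # .com refers to example.com's servers
    "ns.example.com": {"example1.example.com": "192.0.2.10"},
}

def resolve(name):
    """Walk the hierarchy from the root, caching every answer,
    including negative (NXDOMAIN) results."""
    if name in CACHE:
        return CACHE[name]                 # answered locally, no upstream query
    server = "."
    labels = name.split(".")
    # Query progressively longer suffixes: com, example.com, ...
    for i in range(len(labels) - 1, -1, -1):
        qname = ".".join(labels[i:])
        answer = ZONES.get(server, {}).get(qname)
        if answer is None:
            CACHE[name] = "NXDOMAIN"       # negative answers are cached too
            return "NXDOMAIN"
        server = answer
    CACHE[name] = server
    return server
```

The point of the sketch is the last branch: a plain resolver caches the one name that failed, so a stream of *different* non-existent names defeats the cache entirely.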
Performance and resilience have also been improved by increasing the number of root servers: the original 13 root server addresses are now backed by hundreds of anycast instances around the world. Since 2002, APNIC alone has assisted or sponsored at least 29 such servers in 21 regional economies.
Nevertheless, performance is still a problem. And flooding DNS servers with bogus requests, which are then passed up to the root servers, or to a specific domain's DNS servers, can result in an effective DDoS attack.
"You ask for names that don't exist, because if they don't exist they're not in anyone's cache. If they're not in anyone's cache, they'll go straight up to the root. So as long as you can ask for different non-existent names all the time, all of your queries are going to go to a local root server. And if you can ask enough of these questions, like ooh, 10 million a second, 100 million a second, all of a sudden you have the recipe for an attack," Huston said.
Just such an attack happened on October 21, 2016, when the Mirai botnet flooded the servers of Dyn DNS with a DDoS attack topping out at 1.2 terabits per second.
"When you get that much data, it's not just the servers that die. The wires melt, the entire infrastructure surrounding that melts, because [a terabit] is more than anyone can handle," Huston said.
"These attacks are incredibly easy, and incredibly effective."
The cause, Huston said, is the "great and wonderful internet of redundant 20-year-old software built to the cheapest possible price they can ... The entire internet is toxic. It radiates in the dark. It is that bad."
The new technique works by having authoritative servers return not just an NXDOMAIN response for the name that was queried, but a DNSSEC-signed denial (an NSEC record) covering the entire span of possible names that includes the one queried.
If you were to query the root server for the non-existent name www.example. (not www.example.com), the response would say that there are no domains at all between, say, everbank. and exchange.
"We can cache this range response, and use it to respond to subsequent queries that fall into the same range," Huston said. No need to pass the request up the line.
"All of a sudden those five, 10, million recursive resolvers, instead of being your attackers, become your defenders. And you've co-opted an army far, far bigger than your own root server to actually defend the root against attack. So that's a really, really big win."
APNIC is sponsoring the development of this functionality in the next version of BIND, the most widely deployed DNS server software, which is expected to be ready by early 2018.
Knot, an open source DNS server from the Czech Republic, has listed the technique in its 2017 development plan. The developers of the Unbound DNS server are cited in RFC 8198, suggesting they too are likely to support the technique soon.
While aspects of the proposal are "a very good thing", there are limits to the technique's positive effects, according to Cricket Liu, chief DNS architect and senior fellow at Infoblox, and co-author of the O'Reilly Media textbook "DNS and BIND".
"The implementation that APNIC has underwritten is for BIND, which is still the most popular open source DNS server, so that's a great place to start. But remember that it also will only work on recursive, BIND-based DNS servers with DNSSEC validation enabled," Liu told ZDNet.
Huston's studies in 2016 [PDF] showed that only around 26 percent of such queries were validated.
"Add to that my guess that a disproportionately high proportion of those junk queries to the roots come from older DNS servers that aren't administered well and haven't been upgraded in some time, and hence whose behavior won't be affected by the new feature in BIND," Liu said.
"The positive effect of the new feature may be less than we hope. But that's no reason not to move forward! The feature is not just useful because it'll reduce load on the roots. It also makes recursive DNS servers more independent of the root system, which is a very good thing."