DNS caching sites – are they really all that good?

A few years ago I found out about OpenDNS and it seemed like a very cool idea: building a huge DNS cache accessible to anyone on the internet, served from an edge server near you. I also liked the fact that it was another small project aiming high. Add to that a bunch of other cool features, such as anti-phishing protection, and the service becomes even more attractive.

So I started using it right away and I was satisfied with it, although I hadn’t noticed much improvement. The coolest moment was when our ISP’s DNS servers went down: everyone in the office thought the network link was down, but I was still able to browse as usual.

But all is not so great. First of all, OpenDNS doesn’t have many servers. If you live in the States that may not be such a big problem, because they pretty much cover the entire country. But in Europe they only have two nodes, both in the West, and if you live in other parts of the world you’re pretty much out of luck. The result is that in Europe and elsewhere you will get pretty bad latencies compared to your ISP’s DNS. This can become noticeable, or at least cancel out the advantage of the huge cache. And if your ISP is very large, it probably has a very large DNS cache of its own, which makes OpenDNS’s primary reason for existing pretty pointless.

Nowadays there are other services, the most notable being Google DNS. This one will probably solve much of the latency problem, because they appear to have more servers. On the other hand, Google DNS doesn’t offer any services beyond pure and simple caching.

If we stopped here, one might argue that such a service doesn’t bring much benefit, while another might argue that its problems (such as increased latency) are negligible compared to the benefits, especially the value-added services such as anti-phishing filtering or geo-redundancy.

But there is also one major problem that in my opinion cannot be overlooked. Nowadays a number of sites do DNS-based geo-balancing (CDN style). When you ask such a site’s DNS server about a domain it manages, it returns the IP of the server that is closest to you, and it determines this by looking at the IP that issued the query. If you use your ISP’s DNS servers, the site’s DNS server sees the IP of your ISP’s resolver and returns the IP of the server closest to that.
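To make the mechanism concrete, here is a minimal sketch of that decision in Python. Everything in it — the prefixes, the regions, and the edge-server addresses — is made up for illustration; real geo-balancers use far more detailed data.

```python
# Toy sketch of DNS-based geo-balancing: the authoritative server picks
# the edge server "closest" to the IP that sent it the query, which is
# the resolver's IP, not the end user's. All values are hypothetical.

EDGE_SERVERS = {
    "eu": "203.0.113.10",   # hypothetical European edge server
    "us": "198.51.100.10",  # hypothetical US edge server
}

# Crude region map keyed on the querying resolver's first octet
# (illustrative only — real systems use full IP-block databases).
RESOLVER_REGION = {
    "81": "eu",   # e.g. a European ISP's resolver
    "64": "us",   # e.g. a US-located public resolver
}

def answer_query(resolver_ip: str) -> str:
    """Return the edge-server IP nearest to the *querying resolver*."""
    prefix = resolver_ip.split(".")[0]
    region = RESOLVER_REGION.get(prefix, "us")  # fall back to a default
    return EDGE_SERVERS[region]

# The same user gets different answers depending on which resolver asks:
print(answer_query("81.2.3.4"))   # query arrives from a European resolver
print(answer_query("64.5.6.7"))   # query arrives from a US resolver
```

The point of the sketch: the end user never appears in the decision at all, which is exactly why the choice of resolver matters.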

Public DNS caching services use anycast for geo-balancing. The first obvious reason for doing so is simplicity: the user enters only two IPs as nameservers no matter where he lives, and anycast routing ensures that he reaches the node closest to him. The catch is that several servers around the world share the same IP, and any one of them might interrogate the website’s DNS server to resolve the domain. So the website’s DNS server doesn’t really know which of the caching servers performed the query and just assumes one of them.
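This is why the setup instructions are identical everywhere in the world. For example, OpenDNS’s two well-known anycast addresses go into the resolver configuration the same way whether you are in Paris or in Seattle:

```
# /etc/resolv.conf — the same two OpenDNS anycast addresses work
# anywhere; BGP routing delivers each query to the nearest node.
nameserver 208.67.222.222
nameserver 208.67.220.220
```

Google DNS does the same thing with 8.8.8.8 and 8.8.4.4.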

But this is not actually how things happen in the real world (at least for some DNS caching networks). It’s true that the caching servers use anycast and that several servers across the world share the same IP — but only toward the client. When they interrogate the website’s DNS server they use another IP, one unique to that particular server. If you think this solves our problem, though, think again.

Every DNS server that resolves domains according to client geolocation uses a database that binds each IP (block, actually) to a location. There are a number of such databases out there, offered by several companies, and some are available for free. While I don’t have much experience with the paid ones, the free ones have a tendency to map an IP block to the owning company’s location. So a server belonging to a company like Google will be reported as sitting at Google headquarters when it may actually be in a totally different part of the world. And that is the really big problem, because it often brings us back to square one.
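A rough sketch of how such a database lookup works, with made-up blocks and locations standing in for a real database:

```python
# Toy sketch of a geolocation-database lookup: IP blocks are stored as
# (start, end, location) ranges over the integer form of the address.
# The blocks and locations below are invented for illustration.
import bisect
import ipaddress

# Sorted, non-overlapping blocks: (first address, last address, location)
BLOCKS = [
    (int(ipaddress.ip_address("64.0.0.0")),
     int(ipaddress.ip_address("64.255.255.255")), "US"),
    (int(ipaddress.ip_address("81.0.0.0")),
     int(ipaddress.ip_address("81.255.255.255")), "EU"),
]
STARTS = [b[0] for b in BLOCKS]

def locate(ip: str) -> str:
    """Map an IP to the location recorded for its block, or 'unknown'."""
    n = int(ipaddress.ip_address(ip))
    i = bisect.bisect_right(STARTS, n) - 1
    if i >= 0 and BLOCKS[i][0] <= n <= BLOCKS[i][1]:
        return BLOCKS[i][2]
    return "unknown"

# The pitfall described above: whatever location the database recorded
# for a block is what the server "is", no matter where the machine
# using that address physically sits.
print(locate("81.10.20.30"))  # -> EU
print(locate("9.9.9.9"))      # -> unknown
```

If a caching node’s unicast IP belongs to a block registered at the operator’s headquarters, every lookup against this kind of database places it there.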

The non-free geolocation databases claim greater accuracy, but in my experience there is a serious problem with public caching DNS services resolving geo-balanced domains. Although I have run several tests on this, you don’t have to go that far to see the problem: if I resolve google.com with OpenDNS and with Google DNS, I get different results, with servers in different networks and pretty large differences in latency. Guess which one returns the servers closer to me for this query 🙂

I used OpenDNS for a couple of years or so, but I have finally given up and now use my ISP’s nameservers, because large CDNs like Akamai have edge servers at my ISP. With the public caching services I got servers that were thousands of miles away, in other networks. While this is negligible if you’re just reading a webpage, it becomes very important for streaming services and the like. This problem still needs to be addressed properly. But until then, as fashion requires, these services keep getting more popular, although the benefits are minimal and some big issues remain unresolved.
