Archive for the ‘General’ Category

Some PHP statistics

Monday, January 21st, 2013

If you ever took a C++ course you learned something about object-oriented programming, memory allocation and such. But years have passed and those words don’t have the same meaning anymore. Of course nowadays, because of languages like PHP, you don’t have to worry about data types and year-long debugging of memory leaks. Everything is simple nowadays but this comes at a cost.

Of course nowadays memory is so much cheaper that you can throw in as much as you like while getting paid by your memory supplier for it. Of course nowadays you don’t have to worry about processing power because everything is scaled on an unlimited number of cores or even in the aetherous almighty cloud where all apps get eternal happiness just for being there.

But every once in a while you get to write an application where memory consumption and required processing power do matter. Of course if you’re one of the “nowadays people” you don’t even know what this means. But for people like me it’s still important to write efficient applications.

PHP does not help much here. If you get past the nowadays upgrades like interfaces, traits, generators and of course the magic methods (can’t get any more magic than that, can you) you will find that all these waste a lot of your time if you really want to put them to work, and you will eventually miss some of the good old days’ simplicity like multiple inheritance or a simple malloc where you can write binary data in whichever form you choose. Well… here you’ll have some trouble with PHP.

Now suppose you just want to store some data in a reasonable amount of RAM and to access it conveniently. Perhaps you need an array or a list or a hash table. It sounds simple but it’s not so simple if you care about your resources in PHP. So I devised a little benchmark to see how different data structures compare and how useful they could be. I just wanted the simplest thing: to store many ints. Here are the results that I got:


Data Structure           Bytes / Item   Insert Time (microseconds)   Access Time (microseconds)
QuickHashIntHash         20             0.87                         1.37
SPLFixedArray            48             0.09                         0.6
QuickHashStringIntHash   52             1.46                         0.9
SPLDoublyLinkedList      96             0.32                         N/A
array                    149            0.24                         0.49

As you can see I ordered the table by memory consumption. And the winner is… QuickHashIntHash with an astonishing 20 bytes per item. Not bad, not bad at all, but wait a minute… I just want to store some ints, which on my 64-bit computer take 8 bytes each. So in order to store 8 bytes I have to use 20, but at least they’re indexed, which in the end is quite an achievement even if I waste 60% of my memory just on indexing. The timings are not bad at all, and if we look at how the other competitors perform in memory consumption, QuickHashIntHash is quite a winner.

If you never cared about memory efficiency because you have a lot of it, take a look at the most convenient and most used structure ever: array. It takes 149 bytes to store 8 bytes, which means that you waste more than 94% of your memory. RAM may be cheap, but if you bought, let’s say, 32 GB, your useful data is actually 1.7 GB. So you could have skipped the RAM upgrade if you had a memory-efficient app. And this is somehow called progress: making a 30 GB RAM upgrade so you can store the same amount of data as yesterday. It’s good that RAM is so cheap nowadays after all.

If you care about the program I used to build the table, here it is:

// measure memory before allocation so preallocated buckets are counted
$mstart = memory_get_usage();
$a = new QuickHashIntHash(1e7);
$tstart = microtime(true);
for ($i = 0; $i < 1e7; $i++) {
    $a[$i] = rand();
}
$tstop = microtime(true);
$mstop = memory_get_usage();
echo "\nMem usage: ";
echo ($mstop - $mstart) / 1e7;
echo "\nInsert time: ";
echo ($tstop - $tstart) * 1e6 / 1e7;
$tstart = microtime(true);
for ($i = 0; $i < 1e7; $i++) {
    $x = $a[$i];
}
$tstop = microtime(true);
echo "\nAccess time: ";
echo ($tstop - $tstart) * 1e6 / 1e7;


Of course there are a few things worth noting:

  • The program generates just one line of the table. In order to generate the other lines I had to change the data structure and re-run the program.
  • For QuickHashStringIntHash I had to use string keys, which I generated with md5(rand()).
  • I measured how long a plain rand() assignment to a variable takes (and md5(rand()) respectively) and subtracted those values from the ones obtained with the little benchmark program. Thus the results in the table don’t include rand() or md5() times, nor even simple variable assignment times, so they are close to reality.
  • You could therefore argue that there’s no need for an “insert time” column since the assignment itself is not taken into account. This column is actually important when you compare it to the access time, to see whether access time depends on the size of the array at all.
  • For SPLDoublyLinkedList the N/A value means that after about 15 minutes I got bored with the entire benchmark because it was obviously quite useless in this department. For all the other data types the entire benchmark completed well below one minute.
  • I do have frequency scaling on my CPU so the results are influenced by it, although I believe not significantly. You probably have it as well even if you disabled it everywhere. There’s always some obscure BIOS setting that silently overrides everything – trust me on this one.

One final word: if you ever need really efficient memory usage in PHP and you don’t mind getting your hands dirty, use the php://memory stream wrapper and write your own indexing/hashing functions as needed.

An Ipv6 Crash Course

Wednesday, November 30th, 2011


It’s only a matter of time until we will be forced to switch from ipv4 (aka simply ip nowadays) to ipv6. The v4 ips are being exhausted as we speak. Because we haven’t made a smooth transition so far (thank you, big ISPs) we will probably be forced to move pretty quickly during the next few years. Major ISPs have already started deploying ipv6 to the mass market, so far optionally and under the “testing” label. I expect the trend to reach massive proportions next year (2012). So the internet as we know it today will change forever pretty soon.

The news hasn’t reached the mass media so far but the technical media has been bombarded with such messages for quite some time. So in the technical world nowadays everybody knows that ipv6 means more numbers than ipv4 and thus allows everybody to have an ip (or several); even your fridge can get its own ip. Everybody also knows that this time the numbers are usually written in hex instead of decimal, grouped differently and separated by colons instead of dots. But if you want to get your own ipv6 you might want to know more.

My ISP has recently deployed testing ipv6 for opting-in customers. So I was very eager to get ipv6-enabled and be ready for the future. As usual I used the good old Google to find resources about ipv6 deployment and much to my surprise the resources are pretty limited. There are some tutorials and blog posts that describe ipv6 setups to some extent but they are pretty much concentrated on ipv6 tunneling with some standard tunnel providers, and that didn’t help me much. My ISP offers dual stack (ipv4 and ipv6) and prefix delegation, both over PPPoE on top of FTTB. To make the problem worse they provide dynamic ip (and prefix) allocation, so the vast majority of tutorials, which configure static ips, were not so useful to me. I banged my head for too long to get this setup to work with my entire lan and with an already pretty customized configuration. The documentation I could find on the internet failed to mention some basic things that could have got me going fast, so I decided to write this post in order to help others but also so that I don’t forget how to do it again.

Technical Background

There are more changes to ipv6 than numbering and you need to understand the basic principles of these changes before doing anything else.

  • Address scope – in the good old ipv4 you just had ips (routable or private) and net masks. In ipv6 you also have address scopes. I’m not going to repeat information that you can find elsewhere and which serves no real purpose in getting you going quickly. What you should know is that each interface should have an fe80-prefixed address (which is marked “scope link” in linux) and which is not accessible from the internet. If your interface is configured properly you will also get a “global” ip, which is the address with which the interface is accessible from the internet.
  • Stateless and Stateful – in the good old days of ipv4 we used DHCP to get ips assigned. In ipv6 you don’t have to use DHCP at all. All ipv6 equipment can be configured to use Router Advertisements (RA), which is called Stateless autoconfiguration. This means that the router periodically sends ICMPv6 messages to every host announcing prefixes that can be used by the client. This is a feature, not a bug – you can get any ipv6 client going with no dhcp at all; all you need to do is plug it in and it should get an ip and a route. But you still have DHCPv6, which is entirely optional in ipv6. It can be used in stateless mode as well but it’s usually used for Stateful autoconfiguration, which actually means that the DHCPv6 server provides additional information such as DNS servers, domain name, NTP servers etc. As you can see DHCPv6 can serve much more information than its v4 counterpart, which only serves ip, mask, gateway and dns.
  • Prefix Delegation (PD) – in ipv4 we usually had to use NAT to connect our local network. In ipv6 we have so many ips that we don’t have to use NAT anymore: everyone (everything actually) can get its own ip. As a matter of fact, in standard ipv6 there is no NAT at all. But there must be a way of dynamically assigning the needed ipv6 subnets to the clients and this is called Prefix Delegation (PD). With PD the ISP assigns a subnet to the client so that each client interface (even the fridge) can get its own ip that is accessible over the entire internet. The client can also further split the assigned subnet into smaller subnets and thus have more LANs connected. The sky is the limit here: the client can do whatever he wants with his subnet. As far as I know IANA recommends that at least a /64 be provided to the client, so each client can have an entire ipv4-sized internet (as a matter of fact internet × internet hosts).
  • Temporary Addresses – in ipv6 the traditional and simple way of automatically generating ips was based on the MAC of the interface. Some privacy-concerned people added the possibility of using somewhat more randomly generated ips. If you wish to have the latter you must configure linux with “net.ipv6.conf.eth0.use_tempaddr = 1”, but the other ip will still be preferred. If you set it to “2” the temporary address will be preferred. If you have connectivity problems you might want to try setting it to 0.
  • Utilities – in the linux world we have the same set of utilities as in ipv4, only with a 6 added. So we have: “ip -6”, “route -6”, “ip6tables”, “traceroute6”, “ping6” and so on. Ifconfig and mtr need no 6.
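The temporary-address settings above boil down to a couple of sysctl calls – a sketch only, where eth0 is a placeholder interface name and root privileges are assumed:

```
# Prefer RFC 4941 temporary (privacy) addresses on eth0
sysctl -w net.ipv6.conf.eth0.use_tempaddr=2
# Fall back to MAC-derived addresses only, if connectivity misbehaves
sysctl -w net.ipv6.conf.eth0.use_tempaddr=0
```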

Getting It Going

With ipv6 it’s supposed to be simple. If you have an ipv6 provider all you have to do is accept its RAs and you should have an ip and a route. In linux you can enable or disable the acceptance of RAs for each interface either with sysctl or via /proc. With sysctl you have for example “net.ipv6.conf.eth0.accept_ra”, which can be 1 or 0 and thus enables or disables receiving RAs on eth0. Instead of “eth0” you can have any other interface but also “all” or “default”, which are pretty much self-explanatory. You can test your connection with “ping6”.
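As a minimal sketch of the above (eth0 stands in for your upstream interface, root is assumed, and any ipv6-reachable host will do for the ping):

```
# Accept router advertisements on eth0
sysctl -w net.ipv6.conf.eth0.accept_ra=1
# After the next RA you should see an fe80:: link-local plus a global address
ip -6 addr show dev eth0
ip -6 route show dev eth0
# Basic connectivity test
ping6 -c 3 ipv6.google.com
```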

Getting the Router and LAN Going

This is not as trivial as it seems unless you have an already configured or “web-interfaced” router that only has an “enable ipv6 routing and let me do my thing” switch. So if you’re the admin and you also happen to have a linux router and hosts you should consider the following:

  • You need a subnet assigned by your ISP that you can use with your own equipment. You do usually get a /64 or larger subnet by RA from your ISP, but this may NOT necessarily be the subnet delegated to you. It may actually be just a single host ip for use with your computer or router external interface (connected to your ISP). The subnet in this case is the subnet formed by all routers and computers connected to that ISP router.
  • If your delegated subnet is assigned statically (always the same and guaranteed never to change) then things can be much simpler because you can manually assign static ips throughout your LAN(s).
  • If your delegated subnet is assigned dynamically you need to use PD. In order to use PD you MUST use DHCPv6. There are a few ipv6-enabled DHCP clients out there: ISC DHCP – doesn’t seem to support PPP interfaces; Dibbler – I couldn’t make it generate radvd.conf nor make it assign ips to other interfaces; WIDE-DHCPv6 – extensively used in the Openwrt world and seemingly quite outdated, but anyway it works. After you get such a DHCPv6 client you must configure it to use the ISP-connected interface and request PD.
  • When you get your delegated prefix you must have some way to distribute it to your hosts (or LANs). Again you can do this statically if you have a static prefix, but the preferred way is to generate your own RAs. The only linux utility that automatically sends RAs (as far as I know) is radvd. So get radvd, install it and configure it to use the delegated prefix and advertise on your LAN interface. On some configurations (eg: Openwrt with WIDE-DHCPv6) you will see that radvd.conf is automatically generated by the DHCPv6 client and radvd might also be started by it. This is autoconfiguration heaven – otherwise you must fall back on your own scripting skills. Either way don’t forget to configure host interfaces (or other routers) to accept RAs.
  • Everyone should now have its own ip from the ISP-delegated subnet. But you also have to configure routing. With ipv6 it’s supposed to be easier than ever. All you have to do in linux is enable forwarding for the needed interface. You can do this with sysctl or via /proc. With sysctl you have something like “net.ipv6.conf.eth0.forwarding”, which must be 1. It’s the same as in the ipv4 world; the difference is that with ipv6 that’s all you need to do in order to get the router to route (there is no NAT anymore). Remember that you get such a setting for each interface (PPP or tunnels included) and you must set forwarding for the one that you need (or all 🙂 of them). Also remember that you also have “net.ipv6.conf.all.forwarding” and “net.ipv6.conf.default.forwarding”.
  • Last but not least don’t forget that you also have ip6tables. If you didn’t configure it you might want to do that as soon as possible. If you did configure it and you have connectivity problems, maybe you configured it the wrong way. For instance in the ipv4 world Microsoft-minded people used to block everything they could, including ICMP. In the ipv6 world RAs are ICMPv6 messages: you do the math.
  • RAs are forwarded as well if forwarding is configured appropriately. This is quite a feature if you have several routers in your LAN.
  • You might get “unreachable” routes in your routing table. I’m not going to talk about what they are and why they appear. The important things to know are that they sometimes appear “out of the blue” and might prevent an otherwise working environment from working. So if you have connectivity problems and you know you have done everything right, you might want to check your routing table for “unreachable” routes and try to delete them.
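For reference, a minimal radvd.conf along the lines described above could look like this – a sketch only, where eth1 and the 2001:db8:1::/64 documentation prefix are placeholders for your LAN interface and your real delegated prefix:

```
# /etc/radvd.conf -- minimal sketch
interface eth1 {
    AdvSendAdvert on;
    prefix 2001:db8:1::/64 {
        AdvOnLink on;
        AdvAutonomous on;
    };
};
```

Combined with “sysctl -w net.ipv6.conf.all.forwarding=1” and a permissive-enough ip6tables, this is pretty much all a small LAN needs.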


Using btrfs per-file and per-directory compression

Wednesday, August 31st, 2011

According to the btrfs wiki, per-file and per-directory compression was added in the 2.6.39 kernel (May 2011). I spent a lot of time searching for how to use this. I had no luck googling the subject, but after digging deeper and deeper it turns out that the solution is actually very simple.

You may not know or remember that linux actually has “file attributes”. You may also not remember that they are very easy to use with some small CLI tools like “lsattr” and “chattr” which you most probably already have installed on your linux machine. Of course there is also a “compress” attribute which you can use.

So to enable compression for a certain file or directory do:

chattr +c <filename>

and to disable compression do:

chattr -c <filename>

Of course this doesn’t do anything to the existing file or directory contents. Setting the compression flag means that data written from now on will be compressed. So if you consider the following commands:

cat /dev/zero > test
chattr +c test
cat /dev/zero >> test

You will find that the data written by the first cat remains uncompressed and the data written by the second cat is compressed. Of course you have to stop the cat commands at some point with CTRL+C. You can check this behavior by letting cat work for a relatively long time and watching the file size and the free disk space in the meantime (or maybe just the hdd led).
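You can also check the flag itself with lsattr at any time (the file name is a placeholder; this obviously only works on a btrfs mount):

```
chattr +c notes.txt
lsattr notes.txt    # a 'c' in the flags column means compression is set
```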

However if you do the following succession:

cat /dev/zero > test
chattr +c test
echo 0 >> test
echo 0 >> test
cat /dev/zero >> test
lsattr test
chattr +c test

You will find that the second cat still writes uncompressed data and lsattr doesn’t list the “c” flag as enabled. The last chattr command will produce “Invalid argument while setting flags on test”. This is probably a bug in the kernel version that I have (3.0.3-zen). If you echo two zeros in one command instead of two echos with a single zero each, the bug doesn’t appear. Before the last cat things also seem to be in order.

The Mysql Saga

Monday, January 24th, 2011

Mysql has been around for about 15 years and it was the db choice of the day for everyone wanting to go the linux way. After all Mysql is the M in LAMP. But this peaceful history of M has become quite turbulent over the past couple of years.

First there was the big Sun acquisition. Everything was looking fine, people got excited about new opportunities; Sun was a major innovator in the IT world. But soon enough things didn’t look so bright when some of the Mysql developers and creators left Sun. You probably asked yourself back then where Mysql was heading. Was the world as we knew it over? Was there still life after the Mysql acquisition?

That’s when I started looking for alternatives or forks for the first time and there was not much to find. On the other hand Sun was promising to keep things going the same as always and then the future looked bright once more.

It could have ended here but it hasn’t. Oracle took over Sun – and that was maybe a bigger blow than the first one. Oracle makes big bucks after all and from what? Hmmm…. it’s own very expensive DB. Maybe you should revisit the forks once more… At least some people did.

Oracle promised, just as Sun had promised before, to keep Mysql free and open source for the benefit of all mankind. But soon enough Oracle discontinued OpenSolaris, and this not only made everyone suspicious of Oracle’s intentions but actually triggered people to move on from all of Oracle’s open source acquisitions. That’s how projects like Illumos and LibreOffice appeared, and that’s probably how Mysql forks and alternatives will begin to thrive.

In the meanwhile Oracle has recently produced a new stable branch: Mysql 5.5, which Oracle promises to be a real revolution if you look at the marketing slides. This was the first new stable branch in years for Mysql, so it should be something that people really looked forward to, especially with the bunch of long-standing and well-known bugs in 5.1.

On the other side, the two most interesting competitors for Mysql are MariaDB and Drizzle. The first was started by one of the Mysql creators, which really means something, and it also aims to stay binary (and otherwise) compatible with Mysql. The latter is quite the opposite: Drizzle strips everything unneeded out of Mysql and goes its own way, the final goal being performance in a new world dominated by cloud computing. It remains to be seen if either of them succeeds.

The problem for people like me who make a living in the IT world is that we don’t want any problems; especially with the DBs. And that’s probably one of the biggest problems that the Mysql alternatives have. On the other hand people from the open source world don’t like the Oracle approach so something has to be done about it. So far I haven’t found anything worth mentioning about successful production deployments of MariaDB and Drizzle. But I suspect this is about to change in the following years.

I’m going to start replacing Mysql with MariaDB on my testing machines just to see how things go. MariaDB is the obvious choice for people like me who like Mysql and open source, because it keeps the open source flame alive but should also offer a painless transition and more features, all backed by one of the people who started Mysql itself.

On the other hand Drizzle is a very interesting proposal. I like performance because of my geeky nature. I’m attracted by performance, I’m lured by performance, I want to bathe in performance lust. Dropping the features that it drops from Mysql seems like a good idea because I wasn’t really using them anyway. On the other hand having to convert DBs is not a very tempting idea. And last but not least, a complete rewrite must by definition come with new bugs. How many and how bad remains to be established.

In conclusion I will give MariaDB an extensive try as soon as possible and I would then test Drizzle to see if it really delivers what it promises and what the compromise in features and stability is.

Btrfs and the 2k file problem – tests and my experience so far

Tuesday, August 3rd, 2010

I first heard about btrfs a few years ago. I also heard about how great it would be: some people call it the future linux mainstream filesystem, with ext4 being just a temporary solution until btrfs becomes ready for prime time. I have also heard about how it would still take quite some time before it becomes stable. In the meantime it seems to have become quite stable and has its bunch of early adopters, who seem to be quite happy with it. It was also supposed to have its “disk format might change” tag removed in version 2.6.34 of the linux kernel, which obviously was postponed for some reason.

In the light of these events I decided to give it a try, but first I wanted to make sure I knew what I was getting into. After some basic research on the net you will easily find out about the btrfs 2k problem. I was intrigued by this because I know that all filesystems have a problem (bigger or smaller) when it comes to small files, particularly files smaller than their block size. Since people complained about how 80% or so of the space would be wasted on btrfs with 2k files but never took the time to perform the same test on other filesystems for comparison, I decided to do my own tests.

One word of warning though: I performed these tests for my own use and decided to share them in case anyone would find them of any use – not the other way around. So these tests are by no means scientific, accurate or whatever.

The Tests

I have been using xfs for several years and I have been quite satisfied. So I obviously chose to run the tests for xfs and also for ext4 which became (or is about to become) the new linux de facto default.

In addition to the 2k test (which I suspected to be the worst case scenario for btrfs) I decided to also perform a similar test with additional file sizes:

– 1k – it’s even smaller than 2k, so we can see whether 2k is the worst case for btrfs (and others) or the problem gets worse as the files get even smaller

– 4k – filesystems have a tendency to use 4k as default block size nowadays so this one should be a maximum space efficiency scenario

– 6k – to see if the problem is related to the block size difference (is the 2k problem on top of the 4k which is supposed to give us maximum efficiency) or lies rather somewhere else.

I decided to leave all defaults in place because what is a default good for if not to give us the best compromise.

The first step was to create a 1GB file and a loop device for it:
dd if=/dev/zero of=/tst bs=1M count=1k
losetup /dev/loop0 /tst

Then I created a script that would create the filesystem and then the files:
mkfs.btrfs /dev/loop0
mount /dev/loop0 /mnt/tmp
for i in $(seq 1000000); do dd if=/dev/zero of=/mnt/tmp/file_$i bs=4k count=1; done

I stop the script manually when it gives “out of space” errors. After that I check the occupied space:
du -sh --apparent-size /mnt/tmp

I used the “du” approach for convenience, and the “--apparent-size” parameter gives results for the total size of the files rather than the occupied disk space. So the difference between the size of the filesystem (1GB) and whatever “du” gives is actually wasted space.
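If you want to see what “--apparent-size” changes, a sparse file makes the difference obvious on any filesystem (no root needed; the file name is made up):

```shell
cd "$(mktemp -d)"
# 10 MB apparent size but (almost) no blocks actually allocated
truncate -s 10M sparse.bin
du -k --apparent-size sparse.bin   # reports 10240 (the file's size in KB)
du -k sparse.bin                   # reports only the allocated space, far less
```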

Then I would unmount the system:
umount /mnt/tmp

And finally I prepare the script for the next run modifying parameters accordingly.

Test Results

The raw test results are these:

         1k     2k     4k     6k
btrfs   282    200    590    519
xfs     235    464    922    714
ext4     66    130    258    386

All values are total file sizes (as reported by “du -sh --apparent-size”) expressed in MB.

I don’t know about you but I was pretty surprised by these results. If you just like numbers you can say that xfs is “za best” but still sucks for small files, btrfs is somewhere in between and ext4 is really hopeless.

But I’m not one of those people, so looking more carefully I remembered what I said about using defaults. I suspect ext4 has a problem with the maximum number of inodes – the results clearly indicate this to me (in fact, if the default is one inode per 16 KB of disk, a 1 GB filesystem gets about 65,000 inodes, and 65,000 files of 1k/2k/4k/6k are roughly the 66/130/258/386 MB reported above). So I expect ext4 to perform pretty well if tuned properly. Unfortunately the people reporting the 2k file problem don’t seem to like tuning (at least not for btrfs), and since I don’t really consider running ext4, I didn’t get to tuning land, as I expected it to take quite a long time. Even more unfortunate is the fact that real filesystems are a combination of files of all sizes, which might prove difficult to tune for. Anyway, I expected more from the defaults.

The winner still seems to be xfs, which is the only one providing expected results (or close) even with the defaults. You can clearly see that its space problem comes from the block size, but it’s still coping well.

Although btrfs seems to have better results than ext4, I don’t know how it might be tuned to do better. I tried using “max_inline=0” (as suggested by Chris Mason) with the 4k size and unfortunately I got the same result. I didn’t have time to test other file sizes, but even if it would help with the 2k problem I doubt it would help beyond the 4k result.

I personally would trade space for speed or features for the 2k (or less) scenario in some situations. What I’m more worried about is the very poor 4k result since btrfs uses 4k blocks as default as far as I know.

Some Test Peculiarities

During the tests I noticed that btrfs reported sporadic “out of space” errors before giving up entirely. I suspect it tries to optimize and squeeze in more files before it gives up, but unfortunately this process makes some writes fail in the meanwhile. This could be the problem reported here.

Xfs seems to have its own optimizations, because “df” reports some occupied space value at first and less later, after writing some more files. However no writes fail in the meantime.

Real World Impressions

I decided to give btrfs a try despite these results. I first installed it on my laptop and then on my desktop root and var partitions. On my laptop I have a rather old 5400 RPM hdd and in my desktop I have 2 x 7200 RPM rather new hdds in software raid0. Laptop sequential hdd throughput is somewhere around 45 MB/s while desktop throughput can reach 90 MB/s for a single hdd. I use Arch Linux and the yaourt package manager, and running a package search (yaourt -S pkgname) took about 4 seconds on the laptop’s btrfs and 18 seconds on the desktop’s raid0 xfs. I was very impressed by this. Of course I ran the command just after a reboot in order to be sure that those files were not in cache. A “du -s” on some large folder seems to be much faster on btrfs as well, so I suspect that btrfs truly shines in metadata handling. On the other hand these scenarios have always seemed to be the weaknesses of xfs compared to any other filesystem.

If you would like to tune your btrfs you had better think twice. I tried using bigger nodesize and sectorsize values. Everything works fine until an umount or a sync: they both hang forever. This seems to be the problem described here.

The wasted space in the real world doesn’t seem to be that much. I didn’t run accurate measurements but the values reported by “df”, “btrfs-show” and such seem to be pretty close to the ones that I used to have on xfs. On the other hand random access throughput seems to have dropped a little compared to xfs. Again I have not tested this – it’s just an impression.

Another thing you should know before trying btrfs with raid0 is that you do need an initrd which would basically run “btrfs device scan”. Otherwise the kernel will not be able to find the btrfs raid setup and it will end with a kernel panic. This is true even if btrfs is compiled in-kernel but it’s not true if you run bt

A Big Thank You

When testing and reporting problems people seem to forget that there are literally years of work going into a project such as btrfs and yet btrfs is given away for free (as in “no charge”). So I would like to say a big “thank you” to all developers that have worked on this project. I do believe that the future is bright for btrfs.

DNS caching sites – are they really all that good?

Thursday, April 29th, 2010

A few years ago I found out about Opendns and it seemed a very cool idea: building a huge dns cache accessible to anyone on the internet from an edge server near you. I also liked the fact that it was another small project aiming high. Add on top of that a bunch of other cool features, such as anti-phishing protection, and the service is even more attractive.

So I started using it right away and I was satisfied with it although I haven’t noticed much improvement. The coolest moment was when the dns servers of our ISP went down, so everyone in the office thought that the network link was down but I was still able to browse as usual.

But all is not so great. First of all Opendns doesn’t have many servers. If you live in the States maybe that’s not such a big problem because they pretty much cover the entire country. But in Europe they only have 2 nodes, both in the West. If you live in other parts of the world you’re pretty much out of luck. The result is that in Europe or other parts of the world you will get pretty bad latencies compared to your ISP’s DNS. This can become noticeable, or at least minimize the advantage of the huge cache. And if your ISP is very large it probably has a very large DNS cache as well, which makes the first reason for Opendns’s existence pretty pointless.

Nowadays there are other services, the most notable one being Google DNS. This one probably solves much of the latency problem because they appear to have more servers. On the other hand, Google DNS doesn’t offer any services other than pure and simple caching.

If we stop here, one might argue that such a service doesn’t bring much benefit, but another might argue that the problems (such as increased latencies) are negligible compared to the benefits, especially of value-added services such as anti-phishing filtering or geo-redundancy.

But there is also one major problem that in my opinion cannot be overlooked. Nowadays there are a number of sites that do DNS-based geo-balancing (CDN style). When you ask the DNS server of one of these sites about a domain it manages, it will return the ip of the server that is closest to you, and it does this by looking at the ip that makes the query. If you use your ISP’s DNS servers, the site’s DNS server will see the ip of your ISP’s DNS server and return the ip of the server closest to that.

Public DNS caching services use anycast for geo-balancing. The first obvious reason for doing so is simplicity: the user gets only two IPs to enter as nameservers no matter where he lives, and anycast routing will ensure that he reaches the node closest to him. The problem here is that there are several servers around the world with the same IP, and each of them might interrogate the website’s DNS server trying to resolve the domain. So the website’s DNS server doesn’t really know which of the caching servers performed the query and just assumes one of them.

But this is not actually how things happen in the real world (at least for some DNS caching networks). In the real world it’s true that the DNS caching servers use anycast and several servers across the world share the same IP – but only towards the client. When they interrogate the website’s DNS server they use another IP, which is unique to that server. But if you think this solves our problem, think again.

Each DNS server that resolves domains according to client geolocation uses a database that binds each IP (block, actually) to a location. There are a number of such databases out there, offered by several companies, and some of them are available for free. While I don’t have much experience with the paid ones, the free ones have a tendency to match the IP block to the company’s registered location. So a server belonging to a company like Google will be reported as being at Google headquarters when it can actually be in a totally different part of the world. And that is really the big problem, because it brings us many times back to square one.
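A minimal sketch of how such a geolocation lookup typically works: the database is a sorted list of IP ranges, searched with binary search. The blocks and country codes below are invented, not taken from any real database.

```python
# Sketch of an IP-to-location lookup: sorted (start, end, location)
# ranges searched with bisect. The data is entirely hypothetical.
import bisect
import ipaddress

# Blocks sorted by starting address (stored as integers).
BLOCKS = [
    (int(ipaddress.ip_address("5.0.0.0")), int(ipaddress.ip_address("5.255.255.255")), "DE"),
    (int(ipaddress.ip_address("8.8.8.0")), int(ipaddress.ip_address("8.8.8.255")), "US"),
]
STARTS = [b[0] for b in BLOCKS]

def geolocate(ip):
    """Return the location code for `ip`, or None if no block matches."""
    n = int(ipaddress.ip_address(ip))
    i = bisect.bisect_right(STARTS, n) - 1
    if i >= 0 and BLOCKS[i][0] <= n <= BLOCKS[i][1]:
        return BLOCKS[i][2]
    return None
```

Whatever location the database stores for a block is what the geo-balancing DNS server believes – so if a resolver’s block maps to the company’s headquarters, every client behind that resolver gets answers optimized for the headquarters.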

The non-free geolocation databases claim greater accuracy, but from my experience there is a serious problem with public caching DNS services resolving geo-balanced domains. Although I have done several tests on this matter, you don’t have to go that far to see the problem: if I resolve the same geo-balanced domain with OpenDNS and with Google DNS I get different results, with servers in different networks and pretty large differences in latency. Guess which one returns the servers that are closer to me 🙂
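If you want to reproduce this comparison yourself, here is a rough standard-library-only sketch that sends the same A query to two public resolvers and lists the returned addresses. The packet building and parsing are deliberately naive (no EDNS, no truncation handling, and answer names are assumed to be compression pointers), so treat it as an illustration rather than a robust resolver client.

```python
# Send the same DNS A query to two resolvers and compare the answers.
import socket
import struct

def build_query(domain, query_id=0x1234):
    """Build a minimal DNS query packet for an A record."""
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)  # RD set
    qname = b"".join(bytes([len(p)]) + p.encode() for p in domain.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN

def answer_ips(resp):
    """Very naive parse: assumes each answer name is a compression pointer."""
    ancount = struct.unpack(">H", resp[6:8])[0]
    i = 12
    while resp[i] != 0:          # skip the question name
        i += resp[i] + 1
    i += 5                       # zero byte + QTYPE + QCLASS
    ips = []
    for _ in range(ancount):
        i += 2                   # answer name (2-byte pointer, assumed)
        rtype, _cls, _ttl, rdlen = struct.unpack(">HHIH", resp[i:i + 10])
        i += 10
        if rtype == 1 and rdlen == 4:
            ips.append(".".join(str(b) for b in resp[i:i + 4]))
        i += rdlen
    return ips

def query(resolver, domain):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(3)
    s.sendto(build_query(domain), (resolver, 53))
    resp, _ = s.recvfrom(4096)
    s.close()
    return answer_ips(resp)

# Uncomment to run against the real resolvers (needs network access):
# for r in ("208.67.222.222", "8.8.8.8"):   # OpenDNS, Google DNS
#     print(r, query(r, "www.example.com"))
```

For a geo-balanced domain the two resolvers will often print addresses in different networks, which is exactly the effect described above.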

I used OpenDNS for a couple of years or so, but I have finally given up and now use my ISP’s nameservers, because large CDNs like Akamai have edge servers at my ISP. With the public caching services I got servers that were thousands of miles away, in other networks. While this is negligible if you just read a webpage, it becomes very important for streaming services and the like. This problem still needs to be addressed properly. But until then, as fashion requires, these services become more and more popular, although the benefits are minimal and some big issues are not yet properly resolved.

How to get rid of extra traffic for your website aka business as usual for the big guys

Thursday, September 10th, 2009

I use Linux and Firefox whenever I get to choose. I also visit the Apple trailers site a few times a year, just to kill some time and just because they conveniently offer HD trailers. Getting the trailers on this site to play in Firefox on Linux didn’t work out of the box – you had to do some tinkering first. But it wasn’t very hard to do after all.

… Until now, because today I just had some extra minutes to kill, and surprise: I can’t play any trailers from the Apple site. At first I thought it might be a temporary network/file issue of some kind, so I tried several movies. I also tried the links with several players: xine, mplayer, vlc – none of them worked. I did a quick search on the web and discovered that other people have the same problem, and most of them share a common disgust for this “feature” by Apple and decided to look elsewhere. Apparently the Apple guys made some changes that check the user agent, via JavaScript or otherwise. Apparently they did this in order to promote their QuickTime player, which doesn’t come in a Linux flavor.

What they succeeded in doing, however, seems to be getting rid of some extra traffic coming from Linux users. There are plenty of trailer websites on the net, and after all we’re talking about trailers, which are promotional material in the first place. So the big company decides to push the promotional stuff away from some users just to promote some other stuff that they give away for free. “Their stuff is not so free,” you might say, “they just try to sell their notebooks and iPods and stuff.” But they also have QuickTime for Windows, which I don’t think they’ve discontinued. So the push is not necessarily in their direction, but rather away from anything free.

To me this seems like another one of those mistakes made out of greediness by big companies that try to grab yet another crumb of every penny they can get. Let’s not forget that Microsoft had a tradition of such moves, which did it no good in the long term. Apple should not forget that much of its share of the computer business comes from former Microsoft users. And these Microsoft users were indirectly drawn away from Microsoft by Microsoft’s use of such business “techniques”.

I have a theory about this tremendous Apple success which says that current Apple computer customers were drawn away from Microsoft by Linux anti-Microsoft lobbying. Linux users always have complained, and probably always will complain, about Microsoft’s policies, apparent lack of security and so on and so forth. Microsoft, on the other hand, concentrated its response on how Linux is very difficult to install, almost impossible to use, very immature and ultimately incoherent and untrustworthy, because that’s what you get for free. Apple put its name back in people’s ears with the iPod, which was a true master stroke. It was then convenient for many people to notice it as an alternative to the “big bad of the day”, because Apple was not free. It was in fact even more expensive, so it must have been better altogether. That came of course with Apple design and quality, which you have to admit are two of their distinctive features.

This is how I think Apple got its customers from Microsoft, by way of the Linux users’ lobby. Apple never really gave anything back to Linux, and now it has apparently decided to turn its back on them for good. Apparently there’s no money in something that’s free, and where there’s no money there should be no interest. What they don’t realize is that there was some money coming from the no-money business, and they might “benefit” from the same effect that Microsoft has “benefited” from, by making the same Microsoft mistakes and annoying the same users. Greediness has definitely never been good.

It took me just a few seconds to search for “hd trailers” on the web and come up with a beautiful site which, surprisingly, seems to feature the same movies as the Apple site. So why then get rid of some traffic and give it to other sites?

Build your features from the bottom up

Saturday, August 8th, 2009

This should be a very important rule of thumb for every project, but unfortunately it is not. I have seen many projects for which the “man with the idea” wants to build, right from the start, everything that ever comes to his mind. At some point he will probably ask the “technical people” whether any of this is possible at all, but he probably won’t take the advice, and even more probably those people will avoid giving him that kind of advice. Anything must be possible, otherwise the “money people” go to the people who will say it’s possible – it might turn out to be a bad deal for the “money people” in the end, but who cares – not the “technical people”, that’s for sure; and for the “money people” it’s already too late, and after all, they asked for it.
So don’t ask for it. The most successful projects are built from very simple ideas and expanded with great features, and sometimes even with “sister projects” that they wouldn’t even have dreamed of on day one. Take Google for example: they just built a search engine with virtually no “special features” at all: no complicated statistics, no fancy graphics, no image or video search and essentially nothing other than a simple but efficient search engine. Today they offer email services, application hosting, analytics and so on and so forth – you probably know all their services better than me. On top of their services there are a number of other companies that offer services that could have been offered by Google in the first place: take page thumbnails in web searches, for example.
The Google story is very simple: come with a simple idea and make it real. In order to keep it real while it became very successful they had to build infrastructure and applications. Then they took this infrastructure and applications and they thought “what more can we do with it?” That’s how all their services came to life.
Take Google Analytics: they don’t show live traffic. Yet several projects I have worked for wanted their huge datasets to be aggregated in real time and in any way possible: live traffic, live statistics, live everything and for whatever keyword, field, condition the user could think of. Surely this is a great business model on paper – giving the user exactly what he wants and when he wants it. But only on paper: that’s why Google is big and those projects remain small.
Of course there are some other models for building projects and businesses. Like the Microsoft “market-oriented” model. But I’ll talk about those some other time.

Firefox 3.5 and http access control – the nightmare

Wednesday, July 8th, 2009

I updated Firefox to version 3.5, code-named Shiretoko. I then discovered that I had some problems with one of my WordPress installations. The problem was that I could not add a new category. I clicked the damn button over and over again and… nothing. It seemed that it didn’t even perform the request, so I obviously went to Firebug to see if any request was made. And it was – an OPTIONS request. And that’s when the sad story began…

Some of you might not know or remember that HTTP is more than GET and POST. Some of you might brag about HEAD, PUT or DELETE. But there’s also an OPTIONS method in the HTTP protocol. And just to be prepared for future sad surprises like the one I just had, you should also note TRACE and CONNECT. You might wonder who uses them, but some day you might see that someone got the idea of putting them to hard work. So far, dear old Mozilla has decided that we should remember the good old OPTIONS and got a very good idea of how to make it useful in Firefox 3.5.

To continue my story, I began my little research on the web, trying to find out more about the OPTIONS request method and particularly about why and how Firefox decided to make use of it in 3.5. I soon discovered one of the great new features in Firefox 3.5: HTTP access control. If you read the document describing it you will basically find that they use some headers in conjunction with the OPTIONS method in order to provide a kind of cross-site HTTP access control. Apparently this functionality is already standardized (not just a Mozilla innovation).

Standardized as it is, it prevented my WordPress installation from working. The trouble is that Shiretoko sends an OPTIONS request together with the Origin, Access-Control-Request-Method and Access-Control-Request-Headers headers. This is what they call a “preflighted request”. It expects the following headers in return: Access-Control-Allow-Origin, Access-Control-Allow-Methods and Access-Control-Allow-Headers. None of these is ever sent in my lighttpd’s response. lighttpd just sends a 200 response with some common headers and is happy, thinking that everything should be all right since it is a 200 after all. Well, Mozilla decided that it wasn’t.
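For the record, here is a sketch of the kind of response a server would need to produce for such a preflighted OPTIONS request. This is a generic illustration using Python’s standard http.server, not my actual lighttpd setup; it simply echoes back whatever the browser asked about, which a real deployment would want to restrict to known origins and methods.

```python
# Minimal sketch of a server-side answer to a CORS preflight request.
from http.server import BaseHTTPRequestHandler

def preflight_headers(origin, req_method, req_headers):
    """Build the Access-Control-Allow-* headers the browser expects,
    echoing back what it asked about (permissive; illustration only)."""
    return {
        "Access-Control-Allow-Origin": origin,
        "Access-Control-Allow-Methods": req_method,
        "Access-Control-Allow-Headers": req_headers,
    }

class Handler(BaseHTTPRequestHandler):
    def do_OPTIONS(self):
        headers = preflight_headers(
            self.headers.get("Origin", "*"),
            self.headers.get("Access-Control-Request-Method", "GET, POST"),
            self.headers.get("Access-Control-Request-Headers", ""),
        )
        self.send_response(200)
        for name, value in headers.items():
            self.send_header(name, value)
        self.end_headers()
```

Without those three Allow headers in the response, a bare 200 is exactly the situation described above: the server thinks everything is fine, and the browser refuses to send the real request.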

What’s even more interesting is how I got Firefox to make such a request in the first place. This “improvement” was for cross-site requests, remember? Why in the world would adding a category in WordPress be a cross-site request? Simply because I redirect my WordPress login page to https while keeping the “official” site URL on http. So every page with a relative path goes through https after the login, while other pages apparently have absolute paths with the configured URL – which is http. The difference in protocol means, for Shiretoko, that this is a cross-site request even though it is the same domain (no subdomain used in any of the requests). This seems a little excessive to me; it looks more like a bug than a feature.
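A browser origin is essentially the (scheme, host, port) triple, which is why the same domain over http and https counts as cross-site. A tiny illustration with the standard library:

```python
# Two URLs share an origin only if scheme, host and port all match.
from urllib.parse import urlsplit

def origin(url):
    parts = urlsplit(url)
    default = {"http": 80, "https": 443}.get(parts.scheme)
    return (parts.scheme, parts.hostname, parts.port or default)

def same_origin(a, b):
    return origin(a) == origin(b)

# Same domain, different scheme: not the same origin.
print(same_origin("http://example.com/wp-admin/",
                  "https://example.com/wp-admin/"))  # False
```

So the mix of an http site URL with an https admin area is enough to make every admin request look cross-site to the browser, preflight and all.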

After I found the mystery behind all this I started wondering. I wonder if Apache would respond well to these requests. I didn’t have time to try and frankly I don’t really want to. I was very happy with lighttpd – it gives me the lite-ness that I desperately need. I wonder if they got rid of the absolute paths in a newer version of WordPress although I doubt it. I also wonder if we really need these new “features”.

This story reminded me of the days when Internet Explorer was laying down the law: they came up with any crazy idea they had overnight and put it in their browser. Then some MS fans quickly picked up the “feature” and used it on their websites just because it was “the latest news” or “trendy”, or just for some fluffy eye-candy-ness. The result was that the (very) few users that didn’t want to use Internet Explorer just couldn’t see that particular website properly. It seems to be a similar story here: we try to enable people with more functionality but in fact we disable what we already have. Standardized as it is, it just doesn’t seem right. It’s true that I don’t have the latest WordPress and it’s true that I don’t have the latest lighttpd, but they are both the latest versions from the latest stable branch of Debian. I don’t see why people with the latest version of Firefox shouldn’t be able to properly use my website. Debian is a very common distribution, WordPress is also a very common package and lighttpd is pretty common as well. Firefox used to be common too, but it just started to feel too “elite” for me. Perhaps I should go back to the good old Opera – it has never let me down.

Michael Jackson has 3 friends

Tuesday, July 7th, 2009

Has he really? At least that’s what MySpace is telling us:

Michael Jackson has 3 friends

I wonder who Tom might be, since he is one of Michael’s very few friends. We also find out that there’s nothing to say “About Michael Jackson” (who is he, anyway?) and that Michael has only recently become a MySpace fan: he’s been a member since 6/25/2009. He probably registered just before he died. But wait! There’s more… Everybody knows that Elvis is still alive (and he always will be), but apparently Michael hasn’t died either: if you read carefully you’ll see that Michael’s last login was on 6/27/2009. I kind of always had the feeling that this would be the case. Michael couldn’t have died – at least not in our hearts.

This is the MySpace Michael Jackson Memorial Page with JavaScript disabled. It’s funny how much the web has come to depend on JavaScript, Flash and other eye-candy, byte-consuming, traffic-making technology. The web used to be a way of linking information, and some images perhaps. Then it became pretty. Now the prettiness has taken over our lives: we are required to use the “pretty” technologies in order to access the services. Somehow this doesn’t seem right.

Neither does the template that conquers our lives: Michael had to fit into the MySpace template and he just didn’t fit in right. I’m sure he has millions or maybe billions of friends apart from Tom, and I’m sure they all wish they had been friendlier to him when he was alive – when he needed them the most. We somehow forgot during the last few years what was good about Michael Jackson, and now we’re just feeding an always-hungry industry (long live Sony – and they probably will).