A few of my friends who are new to Hadoop frequently ask what a good file size for Hadoop is and how to decide on it. The short answer is that files should not be small, and file size should be chosen with the block size in mind. HDFS is really not designed for many small files.

For each file, the client has to talk to the namenode, which gives it the location(s) of the block(s) of the file, and then the client streams the data from a datanode. In the best case, the client does this once, finds that it is itself the machine holding the data, and reads it directly from disk. That is fast, comparable to a direct disk read. If the client is not on the machine holding the data, it must stream the data over the network, and then it is bound by network I/O speeds, which shouldn't be terrible but are still a bit slower than a direct disk read. With small files, however, you hit an even worse case, where the overhead of talking to the namenode becomes significant. At 1 KB per file, you are exchanging just about as much metadata as actual data, and the client has to make two separate network exchanges, one to the namenode and one to a datanode, to get the data from each file. Add to this that the namenode is probably getting hammered by all of these different threads, and it can become a bottleneck.
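
To make the two round trips concrete, here is a minimal sketch using the standard Hadoop FileSystem API; the path is hypothetical and this is illustrative, not anyone's production code. It first asks the namenode for block locations (pure metadata), then opens the file and streams its bytes from a datanode.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class ReadWithLocations {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/data/example/part-00000");   // hypothetical path

        // Round trip #1: ask the namenode for metadata (where the blocks live).
        FileStatus status = fs.getFileStatus(file);
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation block : blocks) {
            System.out.println("Block hosts: " + String.join(",", block.getHosts()));
        }

        // Round trip #2: stream the actual bytes from a datanode.
        try (FSDataInputStream in = fs.open(file)) {
            IOUtils.copyBytes(in, System.out, 4096, false);
        }
    }
}
```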

If we use HDFS for something it’s not designed to be used for, it’s going to be slow. Merge small files, and use MapReduce to get data locality, and we’ll have much better performance. In fact, because we’ll be able to take better advantage of sequential disk reads, I wouldn’t be surprised if reading from one big HDFS file was even faster than reading many small local files.
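
As a rough illustration of the "merge small files" advice, the sketch below packs a directory of small HDFS files into a single SequenceFile keyed by the original file name; the paths are hypothetical, and this is only one option (HAR files and CombineFileInputFormat address the same problem).

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class MergeSmallFiles {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path inputDir = new Path("/data/small-files");        // hypothetical
        Path merged   = new Path("/data/merged/bundle.seq");  // hypothetical

        try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                SequenceFile.Writer.file(merged),
                SequenceFile.Writer.keyClass(Text.class),
                SequenceFile.Writer.valueClass(BytesWritable.class))) {

            for (FileStatus status : fs.listStatus(inputDir)) {
                // Each input file is small, so read it whole.
                byte[] contents = new byte[(int) status.getLen()];
                try (FSDataInputStream in = fs.open(status.getPath())) {
                    in.readFully(0, contents);
                }
                // Key = original file name, value = raw bytes of the file.
                writer.append(new Text(status.getPath().getName()),
                              new BytesWritable(contents));
            }
        }
    }
}
```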

Another difference between HDFS and other filesystems is that it keeps disk I/O as low as possible by storing data in large blocks (normally 64 MB or 128 MB), whereas a traditional filesystem’s block size is on the order of kilobytes. This is why HDFS is said to be good at processing a few large files rather than a large number of small files: although components like CPUs and RAM have advanced significantly in recent times, disk I/O is an area where we are still not nearly as far along. That is the intention behind such huge blocks (unlike a traditional FS): amortize the cost of seeks and keep disk I/O as low as possible.
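
If you want to see which block size is actually in effect, here is a small hedged example (the path is hypothetical) that prints the cluster default, controlled by dfs.blocksize, and the block size recorded for an existing file.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ShowBlockSize {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/data/example/part-00000");   // hypothetical

        long defaultBlock = fs.getDefaultBlockSize(file);          // cluster default
        long fileBlock = fs.getFileStatus(file).getBlockSize();    // this file's blocks

        System.out.println("Default block size: " + defaultBlock / (1024 * 1024) + " MB");
        System.out.println("Block size of file: " + fileBlock / (1024 * 1024) + " MB");
    }
}
```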

Moreover, if the block size is too small, we will have a greater number of blocks, which means more metadata. This may again degrade performance, as more information needs to be loaded into memory. Each block is considered an object in HDFS and has about 200 bytes of metadata associated with it on the namenode. If we have many small blocks, the metadata just grows, and we might end up with RAM issues.
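
To put rough numbers on this using the ~200-byte figure: 1 GB stored as a single file of eight 128 MB blocks costs the namenode about 9 objects (one file entry plus eight block entries), on the order of 2 KB of metadata. The same 1 GB stored as 10,000 files of roughly 100 KB each costs about 20,000 objects, around 4 MB of metadata, for exactly the same amount of data.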
