Description
Starting with Hadoop 0.17, most of the network I/O uses non-blocking NIO channels. Normal blocking reads and writes are handled by Hadoop itself, using our own cache of selectors. This cache suits Hadoop well, since I/O often occurs on many short-lived threads: the number of fds consumed is proportional to the number of threads currently blocked, not to the total number of threads.
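To illustrate the technique, here is a minimal sketch (not Hadoop's actual SocketIOWithTimeout code; the pool and helper names are made up): the channel stays in non-blocking mode, and a thread that needs to block borrows a selector from a shared pool, waits on it, and returns it for reuse.
{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.ArrayDeque;
import java.util.Deque;

public class SelectorPoolRead {

  // Hypothetical shared pool of selectors; selectors (and their fds) are
  // held only while a thread is actually blocked.
  private static final Deque<Selector> POOL = new ArrayDeque<Selector>();

  static Selector take() throws IOException {
    synchronized (POOL) {
      Selector s = POOL.poll();
      return (s != null) ? s : Selector.open();  // open lazily on a miss
    }
  }

  static void release(Selector s) {
    synchronized (POOL) { POOL.push(s); }        // keep it for reuse
  }

  /**
   * Blocking-style read with a timeout. The channel is assumed to be in
   * non-blocking mode (registering a blocking channel would throw).
   */
  static int read(SocketChannel ch, ByteBuffer buf, long timeoutMs)
      throws IOException {
    int n = ch.read(buf);
    if (n != 0) {
      return n;                                  // data or EOF without waiting
    }
    Selector sel = take();
    try {
      SelectionKey key = ch.register(sel, SelectionKey.OP_READ);
      try {
        if (sel.select(timeoutMs) == 0) {
          return 0;                              // timed out
        }
        return ch.read(buf);
      } finally {
        key.cancel();
        sel.selectNow();                         // flush the cancelled key before reuse
      }
    } finally {
      release(sel);
    }
  }
}
{code}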
If blocking I/O is done through java.*, Sun's implementation uses internal per-thread selectors. These selectors are closed by sun.misc.Cleaner, which, much like finalizers, runs only as a byproduct of GC. That is ill-suited to a workload with many short-lived threads: until a GC happens, the number of these selectors keeps growing, and each selector consumes 3 fds.
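A hedged reproduction sketch of the growth (assumes a Sun JDK of that era, where timed I/O through a channel's socket adaptor creates the per-thread temporary selector, and Linux's /proc/self/fd for counting open fds; the thread count and timeouts are arbitrary):
{code:java}
import java.io.File;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.SocketTimeoutException;
import java.nio.channels.SocketChannel;

public class FdGrowthRepro {

  // Count open fds via /proc/self/fd (Linux only).
  static int openFds() {
    String[] fds = new File("/proc/self/fd").list();
    return (fds == null) ? -1 : fds.length;
  }

  public static void main(String[] args) throws Exception {
    // A local server that never writes, so timed reads always block.
    final ServerSocket server = new ServerSocket(0);
    final InetSocketAddress addr =
        new InetSocketAddress("localhost", server.getLocalPort());

    System.out.println("fds before: " + openFds());
    for (int i = 0; i < 50; i++) {
      Thread t = new Thread() {
        public void run() {
          try {
            SocketChannel ch = SocketChannel.open(addr);
            try {
              ch.socket().setSoTimeout(10);
              try {
                // A timed read through the socket adaptor takes the
                // JDK-internal path that creates a per-thread selector.
                ch.socket().getInputStream().read();
              } catch (SocketTimeoutException expected) {
              }
            } finally {
              ch.close();
            }
          } catch (Exception e) {
            e.printStackTrace();
          }
        }
      };
      t.start();
      t.join();  // the thread dies, but its selector lingers until a GC
    }
    System.out.println("fds after 50 short-lived threads: " + openFds());
    System.gc();             // the cleaners run as a byproduct of GC...
    Thread.sleep(1000);
    System.out.println("fds after GC: " + openFds());  // ...and fds drop back
  }
}
{code}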
Though blocking read and write are handled by Hadoop, connect() still goes through the default implementation, which uses a per-thread selector.
Koji helped a lot in tracking this down. Some sections of 'jmap' output and other info Koji collected led to this suspicion; I will include that in the next comment.
One solution might be to handle connect() in Hadoop as well, using our own selectors, as sketched below.
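Roughly, that could look like the following hypothetical helper (not committed Hadoop code): initiate a non-blocking connect() and wait for OP_CONNECT on one of our cached selectors (e.g. from the pool sketched above), instead of letting the JDK create a per-thread selector.
{code:java}
import java.io.IOException;
import java.net.SocketAddress;
import java.net.SocketTimeoutException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class NonBlockingConnect {

  /**
   * Connect with a timeout using a caller-supplied (cached) selector.
   * The channel is left in non-blocking mode, matching how Hadoop's
   * reads and writes already treat it.
   */
  static void connect(SocketChannel ch, SocketAddress addr,
                      Selector cached, long timeoutMs) throws IOException {
    ch.configureBlocking(false);
    if (ch.connect(addr)) {
      return;                                    // connected immediately
    }
    SelectionKey key = ch.register(cached, SelectionKey.OP_CONNECT);
    try {
      // select() returning 0 means the wait timed out; otherwise
      // finishConnect() completes the handshake or throws on failure.
      if (cached.select(timeoutMs) == 0 || !ch.finishConnect()) {
        throw new SocketTimeoutException("connect timed out: " + addr);
      }
    } finally {
      key.cancel();
      cached.selectNow();                        // flush the cancelled key before reuse
    }
  }
}
{code}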