Hello, Jagane Sundar.
HDFS-3990 made a code change to reject registration of any datanode whose IP address cannot be resolved back to a hostname. The reason for this change is that allowing such a datanode to register caused performance problems later: the namenode web UI's list of live datanodes would attempt a reverse DNS lookup for every unresolved datanode, and on a cluster of thousands of nodes, that many reverse DNS lookups caused a real performance problem.
HDFS-4269 specifically covers some fallout of the HDFS-3990 patch that we observed on Windows. On Windows, a reverse DNS lookup of 127.0.0.1 does not resolve to localhost; AFAIK, Windows is the only OS with this behavior. Since use of localhost is common in the test suites, we added an escape hatch for 127.0.0.1 to keep the test suites passing on Windows.
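On Linux you can see what the reverse lookup returns for a given address with getent, which goes through the same system resolver path (/etc/hosts first, then DNS) that the JVM typically ends up using:

```shell
# Reverse-lookup 127.0.0.1 via the system resolver (/etc/hosts, then DNS).
# On a typical Linux box this prints "127.0.0.1  localhost"; the equivalent
# lookup on Windows does not return "localhost", which is the behavior
# HDFS-4269 works around.
getent hosts 127.0.0.1
```

The same command with your VM's DHCP-assigned address is a quick way to check whether the datanode registration check will pass.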
You mentioned the specific case of running a VM with a DHCP-assigned address and bridged networking. You're right that you need to arrange for the reverse DNS lookup to work. When I've run this kind of setup, I've done it in one of two ways:
- Edit /etc/hosts to map 192.168.1.94 (or any other DHCP-assigned address) to a specific hostname.
- Run my own local name server to do the same.
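For the first option, the fix is a single line in /etc/hosts; the hostname here (hadoop-vm) is just an example, and the address is whatever DHCP handed out:

```
# /etc/hosts
192.168.1.94    hadoop-vm
```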
In both of these cases, a VM reboot may pick up a different DHCP-assigned address and invalidate your configuration (either the /etc/hosts entry or the A/PTR records in your local name server). In theory, you could write some automation to update /etc/hosts or the local name server with the current DHCP-assigned address at boot. In practice, I've personally never bothered to take it that far, because my VMs change their IP addresses infrequently enough that a manual update isn't too cumbersome.
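If you did want to automate the /etc/hosts variant, a boot script along these lines would do. This is only a sketch: the interface name eth0 and the hostname hadoop-vm are assumptions, and the demo below runs against a temporary copy rather than the real /etc/hosts.

```shell
#!/bin/sh
# Replace (or append) the hosts-file entry for a hostname with a new IP.
# Usage: update_hosts_entry <hosts-file> <ip> <hostname>
update_hosts_entry() {
  hosts_file="$1"; ip="$2"; name="$3"
  # Drop any stale line mentioning this hostname, then append the fresh mapping.
  grep -v -w "$name" "$hosts_file" > "$hosts_file.tmp" || true
  printf '%s\t%s\n' "$ip" "$name" >> "$hosts_file.tmp"
  mv "$hosts_file.tmp" "$hosts_file"
}

# Demo against a temporary copy, not the real /etc/hosts.
tmp="$(mktemp)"
printf '127.0.0.1\tlocalhost\n192.168.1.94\thadoop-vm\n' > "$tmp"

# Pretend DHCP handed out a new address this boot.  On a real system you
# would derive this from the interface instead, e.g.:
#   new_ip="$(ip -4 -o addr show eth0 | awk '{print $4}' | cut -d/ -f1)"
new_ip="192.168.1.117"

update_hosts_entry "$tmp" "$new_ip" "hadoop-vm"
cat "$tmp"
rm -f "$tmp"
```

Run at boot (for example from a systemd unit or rc.local), this would keep the hostname mapping in step with whatever address DHCP assigns.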
Unfortunately, I don't think there is anything we can do in the Hadoop code itself to simplify this. I can't think of a reliable way to detect the difference between an unresolvable IP address and a "testing" IP address, for which name resolution isn't important.