Description
When the host provides duplicate user or group names, the NFS gateway will not start and prints errors like the following:
... ...
13/11/25 18:11:52 INFO nfs3.Nfs3Base: registered UNIX signal handlers for [TERM, HUP, INT]
Exception in thread "main" java.lang.IllegalArgumentException: value already present: s-iss
    at com.google.common.base.Preconditions.checkArgument(Preconditions.java:115)
    at com.google.common.collect.AbstractBiMap.putInBothMaps(AbstractBiMap.java:112)
    at com.google.common.collect.AbstractBiMap.put(AbstractBiMap.java:96)
    at com.google.common.collect.HashBiMap.put(HashBiMap.java:85)
    at org.apache.hadoop.nfs.nfs3.IdUserGroup.updateMapInternal(IdUserGroup.java:85)
    at org.apache.hadoop.nfs.nfs3.IdUserGroup.updateMaps(IdUserGroup.java:110)
    at org.apache.hadoop.nfs.nfs3.IdUserGroup.<init>(IdUserGroup.java:54)
    at org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:172)
    at org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:164)
    at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.<init>(Nfs3.java:41)
    at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:52)
13/11/25 18:11:54 INFO nfs3.Nfs3Base: SHUTDOWN_MSG:
... ...
The reason the NFS gateway should not start is that HDFS (in a non-Kerberos cluster) uses the name as the only way to identify a user. A Linux box can have two users with the same name but different user IDs, and Linux itself may work fine with that most of the time. However, when the NFS gateway talks to HDFS, HDFS accepts only the user name. That is, from HDFS's point of view, these two different users are the same user even though they are distinct users on the Linux box.
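The exception above is thrown by Guava's HashBiMap, which IdUserGroup uses to map IDs to names and therefore rejects a name that is already bound to another ID. A minimal, self-contained sketch of the same failure (the UIDs below are made up for illustration; the name "s-iss" is taken from the log above):

import com.google.common.collect.BiMap;
import com.google.common.collect.HashBiMap;

public class DuplicateNameDemo {
    public static void main(String[] args) {
        // A BiMap enforces uniqueness of values as well as keys, because it
        // must support lookups in both directions (UID -> name and name -> UID).
        BiMap<Integer, String> uidToName = HashBiMap.create();
        uidToName.put(1001, "s-iss");
        // A second UID mapped to the same name is what the UID/GID map update
        // hits when the host's user database contains duplicate names:
        // throws java.lang.IllegalArgumentException: value already present: s-iss
        uidToName.put(2001, "s-iss");
    }
}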
Duplicate names on Linux systems are sometimes the result of legacy system configurations or combined name services.
Regardless, the NFS gateway should print some helpful information so the user can understand the error and remove the duplicate names before restarting NFS. One possible shape for that help text is sketched below.
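A hypothetical wrapper around the map update, shown only to illustrate the kind of message the gateway could emit (the method name and wording are not the actual IdUserGroup code):

import com.google.common.collect.BiMap;

public class IdMapUpdateSketch {
    // Hypothetical helper: same put as the UID/GID map update, but it detects the
    // duplicate first and tells the admin what to fix instead of failing opaquely.
    static void putCheckingDuplicates(BiMap<Integer, String> map, int id, String name) {
        if (map.containsValue(name)) {
            throw new IllegalArgumentException(
                "Duplicate name '" + name + "' found in the host's user/group database "
                + "(IDs " + map.inverse().get(name) + " and " + id + "). "
                + "HDFS identifies users by name only, so please remove or rename the "
                + "duplicate entries before restarting the NFS gateway.");
        }
        map.put(id, name);
    }
}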