Details
- Type: Bug
- Status: Closed
- Priority: Minor
- Resolution: Fixed
- Affects Version/s: 0.19.1
- Fix Version/s: None
- Component/s: None
- Environment: AIX 6.1 or Solaris
- Release Note: Accessing HDFS with any IP, hostname, or proxy should work as long as it points to the interface the NameNode is listening on.
Description
After creating and starting the Hadoop NameNode on AIX or Solaris, you can connect to the NameNode only via its hostname, not via its IP address.
fs.default.name=hdfs://p520aix61.mydomain.com:9000
The hostname of the box is p520aix61 and its IP is 10.120.16.68.
If you use the URL "hdfs://10.120.16.68" to connect to the NameNode, the exception shown below is thrown. You can connect successfully only if "hdfs://p520aix61.mydomain.com:9000" is used.
Exception in thread "Thread-0" java.lang.IllegalArgumentException: Wrong FS: hdfs://10.120.16.68:9000/testdata, expected: hdfs://p520aix61.mydomain.com:9000
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:320)
at org.apache.hadoop.dfs.DistributedFileSystem.checkPath(DistributedFileSystem.java:84)
at org.apache.hadoop.dfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:122)
at org.apache.hadoop.dfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:390)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:667)
at TestHadoopHDFS.run(TestHadoopHDFS.java:116)
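For context, the exception comes from a literal comparison of the request URI against the filesystem's own URI (taken from fs.default.name): an IP never string-matches a hostname, even when both resolve to the same interface. The following is a simplified sketch of that kind of check, not Hadoop's actual FileSystem.checkPath implementation; the class name WrongFsCheck is ours.

```java
import java.net.URI;

public class WrongFsCheck {
    // Simplified sketch: scheme and authority of the requested path must
    // string-match the filesystem URI, so "10.120.16.68:9000" is rejected
    // even though it resolves to the same host as "p520aix61.mydomain.com:9000".
    static void checkPath(URI fsUri, URI path) {
        String scheme = path.getScheme();
        if (scheme == null) {
            return; // no scheme: path is relative to the default filesystem
        }
        boolean schemeMismatch = !scheme.equalsIgnoreCase(fsUri.getScheme());
        boolean authorityMismatch = path.getAuthority() != null
                && !path.getAuthority().equalsIgnoreCase(fsUri.getAuthority());
        if (schemeMismatch || authorityMismatch) {
            throw new IllegalArgumentException(
                    "Wrong FS: " + path + ", expected: " + fsUri);
        }
    }

    public static void main(String[] args) {
        URI fsUri = URI.create("hdfs://p520aix61.mydomain.com:9000");

        // Matching hostname and port: passes.
        checkPath(fsUri, URI.create("hdfs://p520aix61.mydomain.com:9000/testdata"));

        // Same box addressed by IP: the string comparison fails.
        boolean threw = false;
        try {
            checkPath(fsUri, URI.create("hdfs://10.120.16.68:9000/testdata"));
        } catch (IllegalArgumentException e) {
            threw = true;
        }
        System.out.println(threw); // prints: true
    }
}
```

In practice this means the client-side workaround is to address the cluster with exactly the URI configured in fs.default.name.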
Attachments
Issue Links
- relates to MAPREDUCE-438: "When connecting to HDFS using an IP Task MapRed gets confused when checking the output path." (Open)