Description
HDFS-12386 added a getServerDefaults call to WebHDFS (this method is used by HDFS-12396), and expects clusters that don't support it to throw UnsupportedOperationException. However, we are seeing:
hadoop distcp -D ipc.client.fallback-to-simple-auth-allowed=true -m 30 -pb -update -skipcrccheck webhdfs://<NN1>:<webhdfsPort>/fileX hdfs://<NN2>:8020/scale1/fileY
...
18/01/05 10:57:33 ERROR tools.DistCp: Exception encountered
org.apache.hadoop.ipc.RemoteException(java.lang.IllegalArgumentException): Invalid value for webhdfs parameter "op": No enum constant org.apache.hadoop.hdfs.web.resources.GetOpParam.Op.GETSERVERDEFAULTS
	at org.apache.hadoop.hdfs.web.JsonUtilClient.toRemoteException(JsonUtilClient.java:80)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:498)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:126)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:765)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:606)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:637)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:633)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getServerDefaults(WebHdfsFileSystem.java:1807)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getKeyProviderUri(WebHdfsFileSystem.java:1825)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getKeyProvider(WebHdfsFileSystem.java:1836)
	at org.apache.hadoop.hdfs.HdfsKMSUtil.addDelegationTokensForKeyProvider(HdfsKMSUtil.java:72)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.addDelegationTokens(WebHdfsFileSystem.java:1627)
	at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:139)
	at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
	at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
	at org.apache.hadoop.tools.SimpleCopyListing.validatePaths(SimpleCopyListing.java:199)
	at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:85)
	at org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:89)
	at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
	at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:368)
	at org.apache.hadoop.tools.DistCp.prepareFileListing(DistCp.java:96)
	at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:205)
	at org.apache.hadoop.tools.DistCp.execute(DistCp.java:182)
	at org.apache.hadoop.tools.DistCp.run(DistCp.java:153)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
	at org.apache.hadoop.tools.DistCp.main(DistCp.java:432)
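The "No enum constant" message points at the likely mechanism: the server maps the "op" query parameter onto the GetOpParam.Op enum, and Java's Enum.valueOf throws IllegalArgumentException for names it does not know, so a pre-HDFS-12386 server fails the enum lookup before any notion of "unsupported operation" can apply. A self-contained sketch (the Op enum below is a stand-in for illustration, not the real GetOpParam.Op):

	public class EnumLookupSketch {
	  // The enum as it exists on an old cluster: no GETSERVERDEFAULTS constant.
	  enum Op { OPEN, GETFILESTATUS, LISTSTATUS, GETDELEGATIONTOKEN }

	  public static void main(String[] args) {
	    try {
	      // Enum.valueOf throws IllegalArgumentException ("No enum constant ...")
	      // for unknown names; the server wraps it in a RemoteException for
	      // the client, producing the error seen above.
	      Op op = Enum.valueOf(Op.class, "GETSERVERDEFAULTS");
	    } catch (IllegalArgumentException e) {
	      System.out.println("Invalid value for webhdfs parameter \"op\": " + e.getMessage());
	    }
	  }
	}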
We either need to make the server throw UnsupportedOperationException, or make the client handle IllegalArgumentException. For backward compatibility and easier operation in the field, the latter is preferred; a sketch follows.
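A minimal sketch of that client-side handling, assuming a hypothetical wrapper around FileSystem#getServerDefaults (illustrative only, not the actual patch): an old server surfaces the failure as an IllegalArgumentException wrapped in a RemoteException, which the client can treat the same as UnsupportedOperationException.

	import java.io.IOException;
	import org.apache.hadoop.fs.FileSystem;
	import org.apache.hadoop.fs.FsServerDefaults;
	import org.apache.hadoop.fs.Path;
	import org.apache.hadoop.ipc.RemoteException;

	public class ServerDefaultsFallback {
	  /**
	   * Returns the server defaults, or null when the remote cluster does not
	   * support GETSERVERDEFAULTS. Old servers report that as an
	   * IllegalArgumentException wrapped in a RemoteException; servers that
	   * know the op but disable it would throw UnsupportedOperationException.
	   * Both are treated as "unsupported" here.
	   */
	  static FsServerDefaults getServerDefaultsOrNull(FileSystem fs) throws IOException {
	    try {
	      return fs.getServerDefaults(new Path("/"));
	    } catch (UnsupportedOperationException e) {
	      return null;
	    } catch (RemoteException e) {
	      if (IllegalArgumentException.class.getName().equals(e.getClassName())) {
	        return null; // old server: unknown "op" value, treat as unsupported
	      }
	      throw e; // anything else is a real error
	    }
	  }
	}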
But we'd better understand why IllegalArgumentException is thrown instead of UnsupportedOperationException.
The correct way to do this is: check whether the operation is supported, and throw UnsupportedOperationException if not; then check whether the parameter is legal, and throw IllegalArgumentException if it is not. We can do that fix as a follow-up of this jira.
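A sketch of that ordering, with a hypothetical SUPPORTED_OPS set and validate() method standing in for the server's real dispatch logic (not the actual NamenodeWebHdfsMethods code):

	import java.util.Set;

	public class OpValidationSketch {
	  // Hypothetical list of ops this server version supports.
	  private static final Set<String> SUPPORTED_OPS =
	      Set.of("OPEN", "GETFILESTATUS", "LISTSTATUS", "GETDELEGATIONTOKEN");

	  static void validate(String op, long length) {
	    // 1. Unknown operation name: the cluster does not support it, so the
	    //    client gets a clear UnsupportedOperationException.
	    if (!SUPPORTED_OPS.contains(op)) {
	      throw new UnsupportedOperationException(op + " is not supported by this cluster");
	    }
	    // 2. Only for a known operation do we validate its other parameters,
	    //    reserving IllegalArgumentException for genuinely malformed values.
	    if (length < 0) {
	      throw new IllegalArgumentException("Invalid value for parameter \"length\": " + length);
	    }
	  }
	}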
Issue Links
- is blocked by HDFS-12396: Webhdfs file system should get delegation token from kms provider. (Resolved)