Details
- Type: Bug
- Status: Resolved
- Priority: Blocker
- Resolution: Fixed
- Affects Version: 2.7.1
Description
Optional boolean parameters that are not provided in the URL cause the WebHDFS create file command to fail.
curl -i -X PUT "http://hadoop-primarynamenode:50070/webhdfs/v1/tmp/test1234?op=CREATE&overwrite=false"
Response:
HTTP/1.1 307 TEMPORARY_REDIRECT
Cache-Control: no-cache
Expires: Fri, 15 Jul 2016 04:10:13 GMT
Date: Fri, 15 Jul 2016 04:10:13 GMT
Pragma: no-cache
Expires: Fri, 15 Jul 2016 04:10:13 GMT
Date: Fri, 15 Jul 2016 04:10:13 GMT
Pragma: no-cache
Content-Type: application/octet-stream
Location: http://hadoop-datanode1:50075/webhdfs/v1/tmp/test1234?op=CREATE&namenoderpcaddress=hadoop-primarynamenode:8020&overwrite=false
Content-Length: 0
Server: Jetty(6.1.26)
Following the redirect:
curl -i -X PUT -T MYFILE "http://hadoop-datanode1:50075/webhdfs/v1/tmp/test1234?op=CREATE&namenoderpcaddress=hadoop-primarynamenode:8020&overwrite=false"
Response:
HTTP/1.1 100 Continue
HTTP/1.1 400 Bad Request
Content-Type: application/json; charset=utf-8
Content-Length: 162
Connection: close
{"RemoteException":{"exception":"IllegalArgumentException","javaClassName":"java.lang.IllegalArgumentException","message":"Failed to parse \"null\" to Boolean."}}
The problem can be worked around by providing both the "createparent" and "overwrite" parameters.
However, this is not possible when I have no control over the WebHDFS calls; for example, Ambari and Hue hit errors because of this.
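Until a fixed build is deployed, a client that does control its own WebHDFS calls can sidestep the bug by always sending the optional booleans explicitly. A minimal sketch of building such a URL (the class and method names here are illustrative, not part of Hadoop):

```java
// Builds a WebHDFS CREATE URL that explicitly carries the optional boolean
// parameters, so a newer DataNode never has to parse a missing value.
public class CreateUrlBuilder {
    static String createUrl(String host, int port, String path,
                            boolean overwrite, boolean createparent) {
        return "http://" + host + ":" + port + "/webhdfs/v1" + path
            + "?op=CREATE"
            + "&overwrite=" + overwrite
            + "&createparent=" + createparent;
    }

    public static void main(String[] args) {
        System.out.println(createUrl("hadoop-primarynamenode", 50070,
                                     "/tmp/test1234", false, true));
    }
}
```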
Attachments
- HDFS-10684.001-branch-2.patch, 3 kB, John Zhuge
- HDFS-10684.002-branch-2.patch, 5 kB, John Zhuge
- HDFS-10684.003.patch, 4 kB, John Zhuge
- HDFS-10684.004.patch, 5 kB, John Zhuge
Issue Links
- is broken by: HDFS-8435 Support CreateFlag in WebHdfs (Resolved)
Activity
Sorry, it was a typo in my description above. But the bug remains.
Yes, I've looked at the code, and providing default values should work.
However, as I've mentioned, Ambari and Hue call WebHDFS URLs without specifying those parameters. I hope they can pick up this fix.
loungerdork, I was not able to reproduce on a 2.7.1 pseudo cluster, a trunk pseudo cluster, or a 2.6.0 cluster. Is it possible that your DataNode hadoop-datanode1 is running a newer version than NameNode hadoop-primarynamenode?
loungerdork, I was able to reproduce the problem with NN running 2.7.1 and DN running 2.8.
HDFS-8435 (in 2.8 but not in 2.7) introduced the new parameter createparent for op CREATE.
To support mixed versions of NNs and DNs in WebHDFS, we have to make sure a null str is handled in the XxxxxParam(final String str) constructors, either by passing DEFAULT to the superclass or by relying on the superclass to already handle a null str:
public XxxxxParam(final String str) { super(DOMAIN, DOMAIN.parse(str == null ? DEFAULT : str)); }
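As an illustration of that pattern, here is a self-contained sketch with simplified stand-ins for Hadoop's parameter classes (not the real org.apache.hadoop.hdfs.web.resources hierarchy):

```java
// Simplified stand-in for a boolean WebHDFS parameter, showing the null-safe
// constructor pattern: substitute DEFAULT before parsing so an omitted query
// parameter never reaches Boolean parsing as the string "null".
public class NullSafeParam {

    // Minimal analogue of BooleanParam.Domain.parse(String).
    static Boolean parseBoolean(String str) {
        if ("true".equalsIgnoreCase(str)) return Boolean.TRUE;
        if ("false".equalsIgnoreCase(str)) return Boolean.FALSE;
        throw new IllegalArgumentException(
            "Failed to parse \"" + str + "\" to Boolean.");
    }

    static final String DEFAULT = "false";
    final Boolean value;

    // The fixed pattern: a missing (null) query string falls back to DEFAULT.
    NullSafeParam(String str) {
        this.value = parseBoolean(str == null ? DEFAULT : str);
    }

    public static void main(String[] args) {
        System.out.println(new NullSafeParam(null).value);   // false
        System.out.println(new NullSafeParam("true").value); // true
        try {
            parseBoolean(null);  // the pre-fix behavior for a missing param
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```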
A quick survey between 2.7 and branch-2.8 yields the following new params:
- CreateFlagParam
- NoRedirectParam
- StartAfterParam
- CreateParentParam (not new, but its usage was expanded to op CREATE)
No new param added between branch-2.8 and trunk.
I don't know whether there are any other cases like CreateParentParam where an existing param's usage was expanded. If anybody comes across one, please let me know or file a JIRA.
Patch 001-branch-2:
- Start with a branch-2 patch because mixed-version testing is only possible between 2.7 and branch-2.
- There is no unit test due to the difficulty of testing mixed versions of NNs and DNs.
- Passed the JIRA test case manually between a 2.7 NN and a branch-2 DN.
-1 overall |
Vote | Subsystem | Runtime | Comment |
---|---|---|---|
0 | reexec | 0m 28s | Docker mode activated. |
+1 | @author | 0m 0s | The patch does not contain any @author tags. |
-1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
+1 | mvninstall | 7m 14s | branch-2 passed |
+1 | compile | 0m 28s | branch-2 passed with JDK v1.8.0_101 |
+1 | compile | 0m 29s | branch-2 passed with JDK v1.7.0_111 |
+1 | checkstyle | 0m 17s | branch-2 passed |
+1 | mvnsite | 0m 35s | branch-2 passed |
+1 | mvneclipse | 0m 14s | branch-2 passed |
+1 | findbugs | 1m 40s | branch-2 passed |
+1 | javadoc | 0m 20s | branch-2 passed with JDK v1.8.0_101 |
+1 | javadoc | 0m 23s | branch-2 passed with JDK v1.7.0_111 |
+1 | mvninstall | 0m 31s | the patch passed |
+1 | compile | 0m 25s | the patch passed with JDK v1.8.0_101 |
+1 | javac | 0m 25s | the patch passed |
+1 | compile | 0m 28s | the patch passed with JDK v1.7.0_111 |
+1 | javac | 0m 28s | the patch passed |
+1 | checkstyle | 0m 14s | the patch passed |
+1 | mvnsite | 0m 32s | the patch passed |
+1 | mvneclipse | 0m 12s | the patch passed |
+1 | whitespace | 0m 0s | The patch has no whitespace issues. |
+1 | findbugs | 1m 54s | the patch passed |
+1 | javadoc | 0m 20s | the patch passed with JDK v1.8.0_101 |
+1 | javadoc | 0m 22s | the patch passed with JDK v1.7.0_111 |
+1 | unit | 0m 59s | hadoop-hdfs-client in the patch passed with JDK v1.7.0_111. |
+1 | asflicense | 0m 18s | The patch does not generate ASF License warnings. |
| | | 21m 40s | |
Subsystem | Report/Notes |
---|---|
Docker | Image:yetus/hadoop:b59b8b7 |
JIRA Issue | |
JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12826661/HDFS-10684.001-branch-2.patch |
Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
uname | Linux 896593d54f21 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
Build tool | maven |
Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
git revision | branch-2 / 34f9330 |
Default Java | 1.7.0_111 |
Multi-JDK versions | /usr/lib/jvm/java-8-oracle:1.8.0_101 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_111 |
findbugs | v3.0.0 |
JDK v1.7.0_111 Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/16607/testReport/ |
modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: hadoop-hdfs-project/hadoop-hdfs-client |
Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/16607/console |
Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
Thanks for picking this up John, the patch looks good. Great find here too, loungerdork; thanks for the detailed report.
One question, how can we add testing to ensure this type of compatibility for the future? I assume this gap is because when these parameters were added, WebHDFSFileSystem was also modified to always send them, and thus was not testing the case where the params were unspecified.
I also thought HttpFSFileSystem as an alternate WebHDFS client implementation would help catch this. Maybe a test that points HttpFSFileSystem directly at a WebHDFS endpoint?
Thanks andrew.wang for the comment. I am looking into unit testing to ensure compatibility.
Hi, this doesn't sound like a blocker for our 2.8 release.
Could someone from HDFS comment on whether this issue is really a blocker, or just something that would be nice to fix?
I think this is still a blocker, it was found via downstream testing with Hue and Ambari.
I see. Thanks for the update, John. I would like to understand the importance and the difficulty (of adding a UT) here for tracking the 2.8 release. From the discussion above, it sounds like we want a unit test to make sure future work does not break compatibility. I agree that work is important, but it is not a blocker for the release. So if we couldn't deliver the UT in the short term for some reason, I am OK with splitting the UT work into a separate JIRA to track later. andrew.wang, what do you think?
A basic unit test should be easy, since it's just calling a REST API without all the parameters set.
I guess if someone can verify that this works via manual testing, we can put it in, but considering that it's just calling a REST API, this really should come with a unit test.
Sounds good, andrew.wang. I will get on it right away. TestWebHdfsFileSystemContract#testResponseCode seems to have some code I can leverage.
Patch 002:
- Add a unit test. Without the fix, it fails with the same reported error 400 Bad Request.
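The shape of such a test can be sketched in a self-contained way; below, a toy HTTP handler stands in for the DataNode's CREATE endpoint (this is illustrative only, not the MiniDFSCluster-based test in the patch):

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

// Toy model of the DN-side CREATE handler: it reads the optional "overwrite"
// query parameter and either fails on a missing value (the reported bug) or
// falls back to a default (the fix). Not the real WebHDFS server code.
public class MissingParamDemo {

    // Extract a query parameter value, or null when the parameter is absent.
    static String queryParam(String query, String name) {
        if (query == null) return null;
        for (String kv : query.split("&")) {
            String[] parts = kv.split("=", 2);
            if (parts[0].equals(name) && parts.length == 2) return parts[1];
        }
        return null;
    }

    // Issue a PUT with no body and return the HTTP status code.
    static int put(String url) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("PUT");
        int code = conn.getResponseCode();
        conn.disconnect();
        return code;
    }

    // Start a one-handler server; `nullSafe` toggles buggy vs fixed parsing.
    static HttpServer serve(boolean nullSafe) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/webhdfs/v1/tmp/f", (HttpExchange ex) -> {
            String raw = queryParam(ex.getRequestURI().getQuery(), "overwrite");
            try {
                // Buggy path turns a missing param into the string "null".
                String str = nullSafe && raw == null ? "false" : String.valueOf(raw);
                if (!str.equals("true") && !str.equals("false")) {
                    throw new IllegalArgumentException(
                        "Failed to parse \"" + str + "\" to Boolean.");
                }
                ex.sendResponseHeaders(201, -1);  // Created
            } catch (IllegalArgumentException e) {
                ex.sendResponseHeaders(400, -1);  // Bad Request
            }
            ex.close();
        });
        server.start();
        return server;
    }

    // Status code observed when "overwrite" is omitted from the URL.
    static int statusWithoutOverwrite(boolean nullSafe) throws IOException {
        HttpServer server = serve(nullSafe);
        try {
            int port = server.getAddress().getPort();
            return put("http://localhost:" + port + "/webhdfs/v1/tmp/f?op=CREATE");
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(statusWithoutOverwrite(false)); // 400, the bug
        System.out.println(statusWithoutOverwrite(true));  // 201, with the fix
    }
}
```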
Tracked down the issue that I think caused this; it added support for these params in the first place.
+1 overall |
Vote | Subsystem | Runtime | Comment |
---|---|---|---|
0 | reexec | 13m 21s | Docker mode activated. |
+1 | @author | 0m 0s | The patch does not contain any @author tags. |
+1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
0 | mvndep | 0m 9s | Maven dependency ordering for branch |
+1 | mvninstall | 8m 44s | branch-2 passed |
+1 | compile | 1m 13s | branch-2 passed with JDK v1.8.0_111 |
+1 | compile | 1m 22s | branch-2 passed with JDK v1.7.0_121 |
+1 | checkstyle | 0m 35s | branch-2 passed |
+1 | mvnsite | 1m 26s | branch-2 passed |
+1 | mvneclipse | 0m 29s | branch-2 passed |
+1 | findbugs | 3m 33s | branch-2 passed |
+1 | javadoc | 1m 12s | branch-2 passed with JDK v1.8.0_111 |
+1 | javadoc | 1m 58s | branch-2 passed with JDK v1.7.0_121 |
0 | mvndep | 0m 8s | Maven dependency ordering for patch |
+1 | mvninstall | 1m 12s | the patch passed |
+1 | compile | 1m 12s | the patch passed with JDK v1.8.0_111 |
+1 | javac | 1m 12s | the patch passed |
+1 | compile | 1m 21s | the patch passed with JDK v1.7.0_121 |
+1 | javac | 1m 21s | the patch passed |
-0 | checkstyle | 0m 30s | hadoop-hdfs-project: The patch generated 2 new + 37 unchanged - 0 fixed = 39 total (was 37) |
+1 | mvnsite | 1m 22s | the patch passed |
+1 | mvneclipse | 0m 23s | the patch passed |
+1 | whitespace | 0m 0s | The patch has no whitespace issues. |
+1 | findbugs | 3m 57s | the patch passed |
+1 | javadoc | 1m 8s | the patch passed with JDK v1.8.0_111 |
+1 | javadoc | 1m 55s | the patch passed with JDK v1.7.0_121 |
+1 | unit | 0m 59s | hadoop-hdfs-client in the patch passed with JDK v1.7.0_121. |
+1 | unit | 50m 52s | hadoop-hdfs in the patch passed with JDK v1.7.0_121. |
+1 | asflicense | 0m 21s | The patch does not generate ASF License warnings. |
| | | 152m 46s | |
Reason | Tests |
---|---|
JDK v1.8.0_111 Failed junit tests | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |
Subsystem | Report/Notes |
---|---|
Docker | Image:yetus/hadoop:b59b8b7 |
JIRA Issue | |
JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12842633/HDFS-10684.002-branch-2.patch |
Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
uname | Linux 7651999005de 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
Build tool | maven |
Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
git revision | branch-2 / 292bd78 |
Default Java | 1.7.0_121 |
Multi-JDK versions | /usr/lib/jvm/java-8-oracle:1.8.0_111 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_121 |
findbugs | v3.0.0 |
checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/17816/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt |
JDK v1.7.0_121 Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17816/testReport/ |
modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17816/console |
Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
Patch 003:
- Expand the unit test to cover all 3 DN CREATE parameters: createflag, createparent, and overwrite.
- Discovered an additional fix needed for OverwriteParam.
- Discovered that the fix for CreateFlagParam is not necessary, because it is a string parameter whose default value is the empty string.
-1 overall |
Vote | Subsystem | Runtime | Comment |
---|---|---|---|
0 | reexec | 0m 12s | Docker mode activated. |
+1 | @author | 0m 0s | The patch does not contain any @author tags. |
+1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
0 | mvndep | 0m 28s | Maven dependency ordering for branch |
+1 | mvninstall | 7m 5s | trunk passed |
+1 | compile | 1m 20s | trunk passed |
+1 | checkstyle | 0m 30s | trunk passed |
+1 | mvnsite | 1m 26s | trunk passed |
+1 | mvneclipse | 0m 25s | trunk passed |
+1 | findbugs | 3m 6s | trunk passed |
+1 | javadoc | 0m 58s | trunk passed |
0 | mvndep | 0m 7s | Maven dependency ordering for patch |
+1 | mvninstall | 1m 14s | the patch passed |
+1 | compile | 1m 17s | the patch passed |
+1 | javac | 1m 17s | the patch passed |
-0 | checkstyle | 0m 28s | hadoop-hdfs-project: The patch generated 1 new + 37 unchanged - 0 fixed = 38 total (was 37) |
+1 | mvnsite | 1m 21s | the patch passed |
+1 | mvneclipse | 0m 20s | the patch passed |
+1 | whitespace | 0m 0s | The patch has no whitespace issues. |
+1 | findbugs | 3m 15s | the patch passed |
+1 | javadoc | 0m 54s | the patch passed |
+1 | unit | 0m 53s | hadoop-hdfs-client in the patch passed. |
-1 | unit | 73m 33s | hadoop-hdfs in the patch failed. |
+1 | asflicense | 0m 18s | The patch does not generate ASF License warnings. |
| | | 100m 31s | |
Reason | Tests |
---|---|
Failed junit tests | hadoop.hdfs.TestTrashWithSecureEncryptionZones |
hadoop.hdfs.TestSecureEncryptionZoneWithKMS |
Subsystem | Report/Notes |
---|---|
Docker | Image:yetus/hadoop:a9ad5d6 |
JIRA Issue | |
JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12842756/HDFS-10684.003.patch |
Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
uname | Linux b504d23954b1 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
Build tool | maven |
Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
git revision | trunk / 4c38f11 |
Default Java | 1.8.0_111 |
findbugs | v3.0.0 |
checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/17832/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt |
unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17832/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17832/testReport/ |
modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17832/console |
Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
Patch 004:
- Move the added tests into a new method, testDatanodeCreateMissingParameter, to avoid a checkstyle error.
-1 overall |
Vote | Subsystem | Runtime | Comment |
---|---|---|---|
0 | reexec | 0m 18s | Docker mode activated. |
+1 | @author | 0m 0s | The patch does not contain any @author tags. |
+1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
0 | mvndep | 0m 7s | Maven dependency ordering for branch |
+1 | mvninstall | 8m 37s | trunk passed |
+1 | compile | 1m 30s | trunk passed |
+1 | checkstyle | 0m 33s | trunk passed |
+1 | mvnsite | 1m 43s | trunk passed |
+1 | mvneclipse | 0m 25s | trunk passed |
+1 | findbugs | 3m 26s | trunk passed |
+1 | javadoc | 1m 6s | trunk passed |
0 | mvndep | 0m 7s | Maven dependency ordering for patch |
+1 | mvninstall | 1m 33s | the patch passed |
+1 | compile | 1m 40s | the patch passed |
+1 | javac | 1m 40s | the patch passed |
+1 | checkstyle | 0m 31s | the patch passed |
+1 | mvnsite | 1m 42s | the patch passed |
+1 | mvneclipse | 0m 25s | the patch passed |
+1 | whitespace | 0m 0s | The patch has no whitespace issues. |
+1 | findbugs | 3m 37s | the patch passed |
+1 | javadoc | 0m 59s | the patch passed |
+1 | unit | 0m 55s | hadoop-hdfs-client in the patch passed. |
-1 | unit | 92m 5s | hadoop-hdfs in the patch failed. |
+1 | asflicense | 0m 22s | The patch does not generate ASF License warnings. |
| | | 123m 7s | |
Reason | Tests |
---|---|
Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
hadoop.hdfs.TestSecureEncryptionZoneWithKMS |
hadoop.hdfs.TestTrashWithSecureEncryptionZones |
Subsystem | Report/Notes |
---|---|
Docker | Image:yetus/hadoop:a9ad5d6 |
JIRA Issue | |
JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12842820/HDFS-10684.004.patch |
Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
uname | Linux 5068d1679936 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
Build tool | maven |
Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
git revision | trunk / f66f618 |
Default Java | 1.8.0_111 |
findbugs | v3.0.0 |
unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17839/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17839/testReport/ |
modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17839/console |
Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
+1 LGTM, committed to trunk, branch-2, branch-2.8. Thanks for the contribution John!
SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10993 (See https://builds.apache.org/job/Hadoop-trunk-Commit/10993/)
HDFS-10684. WebHDFS DataNode calls fail without parameter createparent. (wang: rev fbdbbd57cdc3d8c778fca9266a7cadf298c8ff6c)
- (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsFileSystemContract.java
- (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/CreateParentParam.java
- (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/OverwriteParam.java
For some reason the added test failed in a precommit build (branch-2), but it does not fail in my local tree.
TestWebHdfsFileSystemContract on branch-2 passed locally on Centos 7.2 for me.
Hi loungerdork, thanks for reporting the problem in great detail.
Following the redirect, you didn't use the exact DataNode URL printed in the NameNode's TEMPORARY_REDIRECT response, as specified by Step 2 of the "Create and Write to a File" section in https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html.
I am leaning towards closing this JIRA as "Invalid"; however, there seems to be a simple fix to handle this kind of user error by changing
to
Waiting for more feedback from the community.