Description
There are a few unit tests that rely on hard-coded MiniDFSCluster data dir names:
- TestDataNodeVolumeFailureToleration
- TestDataNodeVolumeFailureReporting
- TestDiskBalancerCommand
- TestBlockStatsMXBean
- TestDataNodeVolumeMetrics
- TestDFSAdmin
- TestDataNodeHotSwapVolumes
- TestDataNodeVolumeFailure
This ticket proposes using MiniDFSCluster#getInstanceStorageDir(0, 1) instead of constructions like new File(cluster.getDataDirectory(), "data1").
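For illustration, a minimal sketch of the intended replacement, assuming the data1 -> (0, 0), data2 -> (0, 1), data3 -> (1, 0), ... convention described in the review comments below; the class name and the two-datanode setup are illustrative, not taken from any of the patched tests:

import java.io.File;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class InstanceStorageDirExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new HdfsConfiguration();
    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(2).build();
    try {
      cluster.waitActive();
      // Old style: hard-coded dir name under the cluster's data directory.
      File hardCoded = new File(cluster.getDataDirectory(), "data1");
      // New style: ask the cluster for datanode 0's first storage dir.
      File fromApi = cluster.getInstanceStorageDir(0, 0);
      System.out.println(hardCoded + " vs " + fromApi);
    } finally {
      cluster.shutdown();
    }
  }
}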
Attachments
- HDFS-13251.003.addendum.patch (2 kB, Ajay Kumar)
- HDFS-13251.003.patch (17 kB, Ajay Kumar)
- HDFS-13251.002.patch (20 kB, Ajay Kumar)
- HDFS-13251.001.patch (19 kB, Ajay Kumar)
- HDFS-13251.000.patch (18 kB, Ajay Kumar)
Activity
Thanks ajayydv for working on this. Here are some comments on patch 001. Overall, we have the convention that data1 -> dir idx 0, data2 -> dir idx 1, and so on. Most of the comments below are about keeping this mapping consistent (illustrated in the sketch after these comments).
TestDataNodeHotSwapVolumes.java
Line 245: change is not needed.
Line 696: Can you elaborate on the change of directory index from i to i + 2?
Line 1038: should this be cluster.getInstanceStorageDir(0, 0)?
TestDataNodeVolumeFailure.java
Line 171: this should be cluster.getInstanceStorageDir(1, 0);
Line 237: this should be cluster.getInstanceStorageDir(0, 0);
Line 500: this does not need to be changed; data5 is a bad location, so it does not need to go through the getInstanceStorageDir() call.
TestDataNodeVolumeFailureToleration.java
Line 53/81: unused dataDir can be removed.
Line 112: dataDir1 should be cluster.getInstanceStorageDir(0, 0);
Line 115: dataDir2 should be cluster.getInstanceStorageDir(0, 1);
Line 163: dn2vol1 should be cluster.getInstanceStorageDir(1, 0);
TestDFSAdmin.java
Line 358: should we use newDir.getAbsolutePath() here instead of the hard-coded data5?
Lines 353-357: we should eliminate the usage of data1, data2, etc. by using cluster.getInstanceStorageDir(0, 0).getAbsolutePath() and cluster.getInstanceStorageDir(0, 1).getAbsolutePath(), respectively.
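To make that suggestion concrete, a hedged sketch of an illustrative helper (not part of any patch here) that resolves the paths TestDFSAdmin used to hard-code as ".../data1" and ".../data2":

import java.io.File;
import org.apache.hadoop.hdfs.MiniDFSCluster;

final class StorageDirPaths {
  /** Illustrative only: absolute paths of datanode 0's two default storage dirs. */
  static String[] firstDataNodeDirs(MiniDFSCluster cluster) {
    File dir0 = cluster.getInstanceStorageDir(0, 0); // formerly ".../data1"
    File dir1 = cluster.getInstanceStorageDir(0, 1); // formerly ".../data2"
    return new String[] { dir0.getAbsolutePath(), dir1.getAbsolutePath() };
  }
}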
xyao, thanks for the review. Patch v2 addresses your suggestions.
TestDataNodeHotSwapVolumes.java
Line 696: Can you elaborate on the change of directory index from i to i + 2?
The test is adding new dirs; we already have indices 0 and 1, so the test succeeds with i + 2 but not with i + 0, i + 1 (see the sketch at the end of this comment).
Line 1038: should this be cluster.getInstanceStorageDir(0, 0)?
Both (0, 0) and (0, 1) work, since we want to keep one of the existing dirs. Changed it to (0, 0) in the new patch.
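A hedged sketch of the i + 2 reasoning; the helper below is illustrative and not copied from TestDataNodeHotSwapVolumes. Each datanode in a default MiniDFSCluster already owns the storage dirs at indices 0 and 1, so paths for volumes being added start at index 2:

import java.io.File;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hdfs.MiniDFSCluster;

final class NewVolumePaths {
  /** Build paths for {@code count} additional volumes on datanode 0,
   *  skipping indices 0 and 1 that the default cluster already uses. */
  static List<File> forDataNode0(MiniDFSCluster cluster, int count) {
    List<File> dirs = new ArrayList<>();
    for (int i = 0; i < count; i++) {
      dirs.add(cluster.getInstanceStorageDir(0, i + 2));
    }
    return dirs;
  }
}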
+1 overall |
Vote | Subsystem | Runtime | Comment |
---|---|---|---|
0 | reexec | 0m 39s | Docker mode activated. |
Prechecks | |||
+1 | @author | 0m 0s | The patch does not contain any @author tags. |
+1 | test4tests | 0m 0s | The patch appears to include 8 new or modified test files. |
trunk Compile Tests | |||
+1 | mvninstall | 18m 47s | trunk passed |
+1 | compile | 1m 0s | trunk passed |
+1 | checkstyle | 0m 54s | trunk passed |
+1 | mvnsite | 1m 12s | trunk passed |
+1 | shadedclient | 11m 59s | branch has no errors when building and testing our client artifacts. |
+1 | findbugs | 2m 11s | trunk passed |
+1 | javadoc | 1m 7s | trunk passed |
Patch Compile Tests | |||
+1 | mvninstall | 1m 15s | the patch passed |
+1 | compile | 1m 0s | the patch passed |
+1 | javac | 1m 0s | the patch passed |
-0 | checkstyle | 0m 52s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 11 new + 100 unchanged - 0 fixed = 111 total (was 100) |
+1 | mvnsite | 1m 5s | the patch passed |
+1 | whitespace | 0m 0s | The patch has no whitespace issues. |
+1 | shadedclient | 11m 4s | patch has no errors when building and testing our client artifacts. |
+1 | findbugs | 2m 10s | the patch passed |
+1 | javadoc | 0m 53s | the patch passed |
Other Tests | |||
+1 | unit | 115m 41s | hadoop-hdfs in the patch passed. |
+1 | asflicense | 0m 23s | The patch does not generate ASF License warnings. |
171m 49s |
Subsystem | Report/Notes |
---|---|
Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
JIRA Issue | HDFS-13251 |
JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12914203/HDFS-13251.002.patch |
Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
uname | Linux ddd62af6af49 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
Build tool | maven |
Personality | /testptch/patchprocess/precommit/personality/provided.sh |
git revision | trunk / 7fab787 |
maven | version: Apache Maven 3.3.9 |
Default Java | 1.8.0_151 |
findbugs | v3.1.0-RC1 |
checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/23441/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23441/testReport/ |
Max. process+thread count | 3052 (vs. ulimit of 10000) |
modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23441/console |
Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
ajayydv , thanks for the update. The new patch looks pretty good to me. Just have a few more comments:
TestDataNodeHotSwapVolumes.java
Line 698/715/716 changes can be skipped. As long as it does not change the standard MiniDFSCluster locations, let's just keep it as-is.
TestDataNodeVolumeFailureReporting.java
Line 81/780: unused dataDir can be removed
TestDataNodeVolumeFailureToleration.java
testValidVolumesAtStartup() change can be skipped. (non-standard locations)
TestDFSAdmin.java
testDataNodeGetReconfigurationStatus() change can be skipped. (non-standard locations)
Line 322-329, suggested changes:
File dnDir0 = cluster.getInstanceStorageDir(0, 0);
File dnDir1 = cluster.getInstanceStorageDir(0, 1);
assertThat(outs.get(offset + 1), is(allOf(containsString("From:"),
containsString(dnDir0.getName()),
containsString(dnDir1.getName()))));
assertThat(outs.get(offset + 2),
is(not(anyOf(containsString(dnDir0.getName()),
containsString(dnDir1.getName())))));
Patch v3 to address xyao's suggestions.
Removed the below changes as suggested:
- TestDataNodeHotSwapVolumes.java Line 698/715/716
- TestDataNodeVolumeFailureReporting Line 81/780
- TestDataNodeVolumeFailureToleration#testValidVolumesAtStartup()
- TestDFSAdmin#testDataNodeGetReconfigurationStatus()
-1 overall |
Vote | Subsystem | Runtime | Comment |
---|---|---|---|
0 | reexec | 0m 53s | Docker mode activated. |
Prechecks | |||
+1 | @author | 0m 0s | The patch does not contain any @author tags. |
+1 | test4tests | 0m 0s | The patch appears to include 8 new or modified test files. |
trunk Compile Tests | |||
+1 | mvninstall | 15m 30s | trunk passed |
+1 | compile | 0m 53s | trunk passed |
+1 | checkstyle | 0m 49s | trunk passed |
+1 | mvnsite | 0m 57s | trunk passed |
+1 | shadedclient | 10m 56s | branch has no errors when building and testing our client artifacts. |
+1 | findbugs | 1m 47s | trunk passed |
+1 | javadoc | 0m 54s | trunk passed |
Patch Compile Tests | |||
+1 | mvninstall | 0m 55s | the patch passed |
+1 | compile | 0m 48s | the patch passed |
+1 | javac | 0m 48s | the patch passed |
+1 | checkstyle | 0m 39s | the patch passed |
+1 | mvnsite | 0m 53s | the patch passed |
+1 | whitespace | 0m 0s | The patch has no whitespace issues. |
+1 | shadedclient | 9m 56s | patch has no errors when building and testing our client artifacts. |
+1 | findbugs | 1m 54s | the patch passed |
+1 | javadoc | 0m 53s | the patch passed |
Other Tests | |||
-1 | unit | 127m 13s | hadoop-hdfs in the patch failed. |
+1 | asflicense | 0m 23s | The patch does not generate ASF License warnings. |
175m 55s |
Reason | Tests |
---|---|
Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
hadoop.hdfs.web.TestWebHdfsTimeouts | |
hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | |
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | |
hadoop.hdfs.TestReconstructStripedFile |
Subsystem | Report/Notes |
---|---|
Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
JIRA Issue | HDFS-13251 |
JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12914358/HDFS-13251.003.patch |
Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
uname | Linux 94aa942c8db8 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
Build tool | maven |
Personality | /testptch/patchprocess/precommit/personality/provided.sh |
git revision | trunk / 9d6994d |
maven | version: Apache Maven 3.3.9 |
Default Java | 1.8.0_151 |
findbugs | v3.1.0-RC1 |
unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23462/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23462/testReport/ |
Max. process+thread count | 3663 (vs. ulimit of 10000) |
modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23462/console |
Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
Test failures are unrelated. The TestDataNodeVolumeFailureReporting failure is due to a timeout; will see if we can improve it and track it via a separate jira.
Thanks ajayydv for the contribution. I've committed the patch to the trunk and branch-3.1.
SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13837 (See https://builds.apache.org/job/Hadoop-trunk-Commit/13837/)
HDFS-13251. Avoid using hard coded datanode data dirs in unit tests. (xyao: rev f83716b7f2e5b63e4c2302c374982755233d4dd6)
- (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeHotSwapVolumes.java
- (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
- (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureToleration.java
- (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeMetrics.java
- (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
- (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/diskbalancer/command/TestDiskBalancerCommand.java
- (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockStatsMXBean.java
- (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java
xyao, thanks for the commit. Added an addendum patch with the two changes below:
- TestDataNodeVolumeFailureToleration: line 162 changed getStorageDir to getInstanceStorageDir (see the sketch after this list)
- TestDFSAdmin: formatting
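For reference, a hedged sketch of the distinction the addendum targets. My understanding is that MiniDFSCluster.getStorageDir is a static helper derived from the shared base directory, while getInstanceStorageDir resolves against this cluster instance's own data directory; the helper class below is illustrative and the exact signatures are worth double-checking against MiniDFSCluster:

import java.io.File;
import org.apache.hadoop.hdfs.MiniDFSCluster;

final class StorageDirComparison {
  static void show(MiniDFSCluster cluster) {
    // Static helper: keyed off MiniDFSCluster's shared base directory, so it
    // may not match this particular cluster's layout.
    File staticDir = MiniDFSCluster.getStorageDir(1, 0);
    // Instance method: resolved against this cluster's actual data directory,
    // which is what the patched tests should rely on.
    File instanceDir = cluster.getInstanceStorageDir(1, 0);
    System.out.println(staticDir + " vs " + instanceDir);
  }
}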
-1 overall |
Vote | Subsystem | Runtime | Comment |
---|---|---|---|
0 | reexec | 0m 26s | Docker mode activated. |
Prechecks | |||
+1 | @author | 0m 0s | The patch does not contain any @author tags. |
+1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
trunk Compile Tests | |||
+1 | mvninstall | 16m 29s | trunk passed |
+1 | compile | 1m 3s | trunk passed |
+1 | checkstyle | 0m 44s | trunk passed |
+1 | mvnsite | 1m 6s | trunk passed |
+1 | shadedclient | 10m 41s | branch has no errors when building and testing our client artifacts. |
+1 | findbugs | 2m 5s | trunk passed |
+1 | javadoc | 0m 57s | trunk passed |
Patch Compile Tests | |||
+1 | mvninstall | 1m 10s | the patch passed |
+1 | compile | 1m 1s | the patch passed |
+1 | javac | 1m 1s | the patch passed |
+1 | checkstyle | 0m 45s | the patch passed |
+1 | mvnsite | 1m 6s | the patch passed |
+1 | whitespace | 0m 0s | The patch has no whitespace issues. |
+1 | shadedclient | 10m 14s | patch has no errors when building and testing our client artifacts. |
+1 | findbugs | 2m 7s | the patch passed |
+1 | javadoc | 0m 54s | the patch passed |
Other Tests | |||
-1 | unit | 115m 19s | hadoop-hdfs in the patch failed. |
-1 | asflicense | 0m 18s | The patch generated 3 ASF License warnings. |
166m 14s |
Reason | Tests |
---|---|
Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
hadoop.hdfs.web.TestWebHdfsTimeouts | |
hadoop.hdfs.TestReadStripedFileWithMissingBlocks | |
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
Subsystem | Report/Notes |
---|---|
Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
JIRA Issue | HDFS-13251 |
JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12914629/HDFS-13251.003.addendum.patch |
Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
uname | Linux fec6118a789d 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
Build tool | maven |
Personality | /testptch/patchprocess/precommit/personality/provided.sh |
git revision | trunk / 5ff22d4 |
maven | version: Apache Maven 3.3.9 |
Default Java | 1.8.0_151 |
findbugs | v3.1.0-RC1 |
unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23498/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23498/testReport/ |
asflicense | https://builds.apache.org/job/PreCommit-HDFS-Build/23498/artifact/out/patch-asflicense-problems.txt |
Max. process+thread count | 3300 (vs. ulimit of 10000) |
modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23498/console |
Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
Thanks ajayydv for the addendum patch. I've committed it to the trunk and branch-3.1.
SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13844 (See https://builds.apache.org/job/Hadoop-trunk-Commit/13844/)
HDFS-13251. Avoid using hard coded datanode data dirs in unit (xyao: rev da777a5498e73f9a44e810dc6771e5c8fe37b6f6)
- (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureToleration.java
- (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java
xyao, I don't find its usage in TestDiskBalancerCommand.