  Hadoop HDFS / HDFS-10958

Add instrumentation hooks around Datanode disk IO

    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 3.0.0-alpha2
    • Component/s: datanode
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      This ticket adds instrumentation hooks around Datanode disk IO, building on the refactoring work from HDFS-10930.

      1. HDFS-10958.01.patch
        124 kB
        Arpit Agarwal
      2. HDFS-10958.02.patch
        132 kB
        Arpit Agarwal
      3. HDFS-10958.03.patch
        133 kB
        Arpit Agarwal
      4. HDFS-10958.04.patch
        133 kB
        Arpit Agarwal
      5. HDFS-10958.05.patch
        137 kB
        Arpit Agarwal
      6. HDFS-10958.06.patch
        137 kB
        Arpit Agarwal

        Issue Links

          Activity

          Arpit Agarwal added a comment - edited

          Attached a patch to factor out most of the DataNode file IO dependencies into a FileIoProvider class for easier testing/instrumentation. IO can be hooked by implementing the new FileIoEvents interface. The default hooks do nothing. A couple of classes don't use FileIoProvider yet (DataStorage and BlockPoolSliceStorage).

          The patch aims to introduce no behavioral changes. Thanks to Xiaoyu Yao for reviewing an early iteration of this large patch.
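
          For context, the hook model described above amounts to a set of no-op callbacks that an operator-supplied class can override. The sketch below only illustrates that idea; the method names and signatures are assumptions and do not reproduce the actual FileIoEvents interface in the attached patch.

          // Hypothetical illustration only; the real FileIoEvents interface in the patch
          // may use different method names, parameters, or an abstract class instead.
          import java.util.concurrent.atomic.AtomicLong;

          interface DiskIoHooks {                        // stand-in for FileIoEvents
            default void beforeFileIo(String op) { }     // default hooks do nothing
            default void afterFileIo(String op, long startNanos) { }
            default void onFailure(String op, Exception error) { }
          }

          // An operator-supplied override that counts completed and failed operations.
          class CountingDiskIoHooks implements DiskIoHooks {
            private final AtomicLong completed = new AtomicLong();
            private final AtomicLong failed = new AtomicLong();

            @Override
            public void afterFileIo(String op, long startNanos) { completed.incrementAndGet(); }

            @Override
            public void onFailure(String op, Exception error) { failed.incrementAndGet(); }

            long completedOps() { return completed.get(); }
            long failedOps() { return failed.get(); }
          }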

          Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 15s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 7 new or modified test files.
          0 mvndep 1m 39s Maven dependency ordering for branch
          +1 mvninstall 8m 32s trunk passed
          +1 compile 9m 47s trunk passed
          +1 checkstyle 1m 47s trunk passed
          +1 mvnsite 2m 37s trunk passed
          +1 mvneclipse 0m 54s trunk passed
          +1 findbugs 4m 41s trunk passed
          +1 javadoc 1m 57s trunk passed
          0 mvndep 0m 15s Maven dependency ordering for patch
          +1 mvninstall 1m 51s the patch passed
          +1 compile 9m 11s the patch passed
          +1 javac 9m 11s the patch passed
          -0 checkstyle 1m 47s root: The patch generated 17 new + 1144 unchanged - 10 fixed = 1161 total (was 1154)
          +1 mvnsite 2m 37s the patch passed
          +1 mvneclipse 0m 54s the patch passed
          -1 whitespace 0m 0s The patch has 22 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
          +1 xml 0m 1s The patch has no ill-formed XML file.
          -1 findbugs 1m 55s hadoop-hdfs-project/hadoop-hdfs generated 8 new + 0 unchanged - 0 fixed = 8 total (was 0)
          -1 javadoc 0m 45s hadoop-hdfs-project_hadoop-hdfs generated 1 new + 7 unchanged - 0 fixed = 8 total (was 7)
          +1 unit 8m 23s hadoop-common in the patch passed.
          +1 unit 1m 0s hadoop-hdfs-client in the patch passed.
          -1 unit 61m 24s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 34s The patch does not generate ASF License warnings.
          128m 30s



          Reason Tests
          FindBugs module:hadoop-hdfs-project/hadoop-hdfs
            new org.apache.hadoop.hdfs.server.datanode.FileIoProvider$WrappedFileInputStream(FileIoProvider, FsVolumeSpi, File) may fail to clean up java.io.InputStream. Obligation to clean up resource created at FileIoProvider.java:[line 707] is not discharged.
            new org.apache.hadoop.hdfs.server.datanode.FileIoProvider$WrappedFileInputStream(FileIoProvider, FsVolumeSpi, File, FileIoProvider$1) may fail to clean up java.io.InputStream. Obligation to clean up resource created at FileIoProvider.java:[line 699] is not discharged.
            new org.apache.hadoop.hdfs.server.datanode.FileIoProvider$WrappedFileInputStream(FileIoProvider, FsVolumeSpi, FileDescriptor) may fail to clean up java.io.InputStream. Obligation to clean up resource created at FileIoProvider.java:[line 716] is not discharged.
            new org.apache.hadoop.hdfs.server.datanode.FileIoProvider$WrappedFileInputStream(FileIoProvider, FsVolumeSpi, FileDescriptor, FileIoProvider$1) may fail to clean up java.io.InputStream. Obligation to clean up resource created at FileIoProvider.java:[line 699] is not discharged.
            new org.apache.hadoop.hdfs.server.datanode.FileIoProvider$WrappedFileOutputStream(FileIoProvider, FsVolumeSpi, File, boolean) may fail to clean up java.io.OutputStream. Obligation to clean up resource created at FileIoProvider.java:[line 780] is not discharged.
            new org.apache.hadoop.hdfs.server.datanode.FileIoProvider$WrappedFileOutputStream(FileIoProvider, FsVolumeSpi, File, boolean, FileIoProvider$1) may fail to clean up java.io.OutputStream. Obligation to clean up resource created at FileIoProvider.java:[line 771] is not discharged.
            new org.apache.hadoop.hdfs.server.datanode.FileIoProvider$WrappedFileOutputStream(FileIoProvider, FsVolumeSpi, FileDescriptor) may fail to clean up java.io.OutputStream. Obligation to clean up resource created at FileIoProvider.java:[line 789] is not discharged.
            new org.apache.hadoop.hdfs.server.datanode.FileIoProvider$WrappedFileOutputStream(FileIoProvider, FsVolumeSpi, FileDescriptor, FileIoProvider$1) may fail to clean up java.io.OutputStream. Obligation to clean up resource created at FileIoProvider.java:[line 771] is not discharged.
          Failed junit tests hadoop.hdfs.TestReplication
            hadoop.hdfs.TestRead
            hadoop.hdfs.TestPread
            hadoop.hdfs.tools.TestDebugAdmin
            hadoop.hdfs.TestSetrepIncreasing
            hadoop.hdfs.server.datanode.TestDataNodeMetrics
            hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup
            hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks
            hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl
            hadoop.hdfs.TestDecommission
            hadoop.hdfs.TestFileAppend
            hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes
            hadoop.hdfs.server.datanode.TestReadOnlySharedStorage
            hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer
            hadoop.hdfs.server.balancer.TestBalancer
            hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer
            hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS
            hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList
            hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes
            hadoop.hdfs.TestFileCreation
            hadoop.hdfs.TestDataTransferKeepalive
            hadoop.tools.TestHdfsConfigFields
            hadoop.hdfs.TestSmallBlock
            hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
            hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped
            hadoop.hdfs.server.namenode.TestFileLimit
            hadoop.hdfs.server.namenode.TestFsck
            hadoop.hdfs.TestInjectionForSimulatedStorage



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:a9ad5d6
          JIRA Issue HDFS-10958
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12842647/HDFS-10958.01.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml
          uname Linux 596b663a09dc 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 92a8917
          Default Java 1.8.0_111
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/17818/artifact/patchprocess/diff-checkstyle-root.txt
          whitespace https://builds.apache.org/job/PreCommit-HDFS-Build/17818/artifact/patchprocess/whitespace-eol.txt
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/17818/artifact/patchprocess/new-findbugs-hadoop-hdfs-project_hadoop-hdfs.html
          javadoc https://builds.apache.org/job/PreCommit-HDFS-Build/17818/artifact/patchprocess/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/17818/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/17818/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/17818/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          Arpit Agarwal added a comment -

          The v02 patch fixes the Jenkins issues. The test failures were all test-only issues. The Findbugs warnings are all false positives; I have attempted to suppress them.

          Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 12s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 9 new or modified test files.
          0 mvndep 1m 41s Maven dependency ordering for branch
          +1 mvninstall 6m 51s trunk passed
          +1 compile 9m 34s trunk passed
          +1 checkstyle 1m 48s trunk passed
          +1 mvnsite 2m 39s trunk passed
          +1 mvneclipse 0m 55s trunk passed
          +1 findbugs 4m 40s trunk passed
          +1 javadoc 1m 56s trunk passed
          0 mvndep 0m 15s Maven dependency ordering for patch
          +1 mvninstall 1m 52s the patch passed
          +1 compile 9m 14s the patch passed
          +1 javac 9m 14s the patch passed
          -0 checkstyle 1m 47s root: The patch generated 6 new + 1158 unchanged - 10 fixed = 1164 total (was 1168)
          +1 mvnsite 2m 35s the patch passed
          +1 mvneclipse 0m 54s the patch passed
          -1 whitespace 0m 0s The patch has 5 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
          +1 xml 0m 1s The patch has no ill-formed XML file.
          -1 findbugs 1m 57s hadoop-hdfs-project/hadoop-hdfs generated 9 new + 0 unchanged - 0 fixed = 9 total (was 0)
          +1 javadoc 1m 55s the patch passed
          -1 unit 7m 54s hadoop-common in the patch failed.
          +1 unit 1m 1s hadoop-hdfs-client in the patch passed.
          -1 unit 63m 25s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 32s The patch does not generate ASF License warnings.
          128m 6s



          Reason Tests
          FindBugs module:hadoop-hdfs-project/hadoop-hdfs
            new org.apache.hadoop.hdfs.server.datanode.FileIoProvider$WrappedFileInputStream(FileIoProvider, FsVolumeSpi, File) may fail to clean up java.io.InputStream. Obligation to clean up resource created at FileIoProvider.java:[line 723] is not discharged.
            new org.apache.hadoop.hdfs.server.datanode.FileIoProvider$WrappedFileInputStream(FileIoProvider, FsVolumeSpi, File, FileIoProvider$1) may fail to clean up java.io.InputStream. Obligation to clean up resource created at FileIoProvider.java:[line 714] is not discharged.
            new org.apache.hadoop.hdfs.server.datanode.FileIoProvider$WrappedFileInputStream(FileIoProvider, FsVolumeSpi, FileDescriptor) may fail to clean up java.io.InputStream. Obligation to clean up resource created at FileIoProvider.java:[line 733] is not discharged.
            new org.apache.hadoop.hdfs.server.datanode.FileIoProvider$WrappedFileInputStream(FileIoProvider, FsVolumeSpi, FileDescriptor, FileIoProvider$1) may fail to clean up java.io.InputStream. Obligation to clean up resource created at FileIoProvider.java:[line 714] is not discharged.
            new org.apache.hadoop.hdfs.server.datanode.FileIoProvider$WrappedFileOutputStream(FileIoProvider, FsVolumeSpi, File, boolean) may fail to clean up java.io.OutputStream. Obligation to clean up resource created at FileIoProvider.java:[line 799] is not discharged.
            new org.apache.hadoop.hdfs.server.datanode.FileIoProvider$WrappedFileOutputStream(FileIoProvider, FsVolumeSpi, File, boolean, FileIoProvider$1) may fail to clean up java.io.OutputStream. Obligation to clean up resource created at FileIoProvider.java:[line 789] is not discharged.
            new org.apache.hadoop.hdfs.server.datanode.FileIoProvider$WrappedFileOutputStream(FileIoProvider, FsVolumeSpi, FileDescriptor) may fail to clean up java.io.OutputStream. Obligation to clean up resource created at FileIoProvider.java:[line 809] is not discharged.
            new org.apache.hadoop.hdfs.server.datanode.FileIoProvider$WrappedFileOutputStream(FileIoProvider, FsVolumeSpi, FileDescriptor, FileIoProvider$1) may fail to clean up java.io.OutputStream. Obligation to clean up resource created at FileIoProvider.java:[line 789] is not discharged.
            Redundant nullcheck of org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImplBuilder.fileIoProvider, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImplBuilder.build(). Redundant null check at FsVolumeImplBuilder.java:[line 74].
          Failed junit tests hadoop.security.token.delegation.web.TestWebDelegationToken
            hadoop.hdfs.TestTrashWithSecureEncryptionZones
            hadoop.hdfs.TestDataTransferKeepalive
            hadoop.hdfs.server.namenode.TestFsck
            hadoop.hdfs.TestSecureEncryptionZoneWithKMS



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:a9ad5d6
          JIRA Issue HDFS-10958
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12842678/HDFS-10958.02.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml
          uname Linux 2d791cbf363f 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 4c38f11
          Default Java 1.8.0_111
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/17821/artifact/patchprocess/diff-checkstyle-root.txt
          whitespace https://builds.apache.org/job/PreCommit-HDFS-Build/17821/artifact/patchprocess/whitespace-eol.txt
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/17821/artifact/patchprocess/new-findbugs-hadoop-hdfs-project_hadoop-hdfs.html
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/17821/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/17821/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/17821/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/17821/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          Arpit Agarwal added a comment -

          The v03 patch makes another attempt to suppress the Findbugs warnings.

          Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 17s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 9 new or modified test files.
          0 mvndep 0m 17s Maven dependency ordering for branch
          +1 mvninstall 6m 52s trunk passed
          +1 compile 9m 38s trunk passed
          +1 checkstyle 1m 47s trunk passed
          +1 mvnsite 2m 40s trunk passed
          +1 mvneclipse 0m 55s trunk passed
          +1 findbugs 4m 42s trunk passed
          +1 javadoc 1m 58s trunk passed
          0 mvndep 0m 15s Maven dependency ordering for patch
          +1 mvninstall 1m 54s the patch passed
          +1 compile 9m 14s the patch passed
          +1 javac 9m 14s the patch passed
          -0 checkstyle 1m 48s root: The patch generated 5 new + 1159 unchanged - 10 fixed = 1164 total (was 1169)
          +1 mvnsite 2m 36s the patch passed
          +1 mvneclipse 0m 55s the patch passed
          -1 whitespace 0m 0s The patch has 8 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
          +1 xml 0m 2s The patch has no ill-formed XML file.
          +1 findbugs 5m 6s the patch passed
          +1 javadoc 1m 57s the patch passed
          -1 unit 8m 6s hadoop-common in the patch failed.
          +1 unit 1m 1s hadoop-hdfs-client in the patch passed.
          -1 unit 89m 26s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 35s The patch does not generate ASF License warnings.
          153m 23s



          Reason Tests
          Failed junit tests hadoop.security.token.delegation.web.TestWebDelegationToken
            hadoop.hdfs.TestDataTransferKeepalive
            hadoop.hdfs.server.namenode.TestFsck
            hadoop.hdfs.TestSecureEncryptionZoneWithKMS
            hadoop.hdfs.TestTrashWithSecureEncryptionZones



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:a9ad5d6
          JIRA Issue HDFS-10958
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12842722/HDFS-10958.03.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml
          uname Linux 3bfcfbbeef37 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 4c38f11
          Default Java 1.8.0_111
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/17825/artifact/patchprocess/diff-checkstyle-root.txt
          whitespace https://builds.apache.org/job/PreCommit-HDFS-Build/17825/artifact/patchprocess/whitespace-eol.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/17825/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/17825/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/17825/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/17825/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          Arpit Agarwal added a comment -

          The v04 patch fixes TestFsck. The remaining test failures are unrelated to the patch.

          Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 17s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 9 new or modified test files.
          0 mvndep 0m 15s Maven dependency ordering for branch
          +1 mvninstall 6m 48s trunk passed
          +1 compile 9m 40s trunk passed
          +1 checkstyle 1m 47s trunk passed
          +1 mvnsite 2m 40s trunk passed
          +1 mvneclipse 0m 55s trunk passed
          +1 findbugs 4m 43s trunk passed
          +1 javadoc 1m 57s trunk passed
          0 mvndep 0m 15s Maven dependency ordering for patch
          +1 mvninstall 1m 52s the patch passed
          +1 compile 9m 16s the patch passed
          +1 javac 9m 16s the patch passed
          -0 checkstyle 1m 47s root: The patch generated 5 new + 1159 unchanged - 10 fixed = 1164 total (was 1169)
          +1 mvnsite 2m 36s the patch passed
          +1 mvneclipse 0m 56s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 2s The patch has no ill-formed XML file.
          +1 findbugs 5m 7s the patch passed
          +1 javadoc 1m 58s the patch passed
          -1 unit 8m 3s hadoop-common in the patch failed.
          +1 unit 1m 1s hadoop-hdfs-client in the patch passed.
          -1 unit 90m 45s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 33s The patch does not generate ASF License warnings.
          154m 33s



          Reason Tests
          Failed junit tests hadoop.security.token.delegation.web.TestWebDelegationToken
            hadoop.hdfs.TestSecureEncryptionZoneWithKMS
            hadoop.hdfs.TestTrashWithSecureEncryptionZones
            hadoop.hdfs.TestDFSClientRetries



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:a9ad5d6
          JIRA Issue HDFS-10958
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12842738/HDFS-10958.04.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml
          uname Linux 61efbcc9bec6 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 4c38f11
          Default Java 1.8.0_111
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/17827/artifact/patchprocess/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/17827/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/17827/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/17827/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/17827/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          Xiaoyu Yao added a comment -

          Thanks Arpit Agarwal for updating the patch. I'll review the latest patch today.

          The following three unit test failures are tracked by HADOOP-13980.

          hadoop.security.token.delegation.web.TestWebDelegationToken
          hadoop.hdfs.TestSecureEncryptionZoneWithKMS
          hadoop.hdfs.TestTrashWithSecureEncryptionZones
          

          Can you confirm if the following unit test failure is related to this patch or not?

          hadoop.hdfs.TestDFSClientRetries
          
          Arpit Agarwal added a comment -

          Thanks Xiaoyu Yao. TestDFSClientRetries is unrelated; I ran it multiple times locally with my patch and it passed.

          Xiaoyu Yao added a comment -

          Thanks Arpit Agarwal for working on this. The latest patch looks pretty good to me; I just have a few minor questions/issues below.

          1. NativeIO.java#getShareDeleteFileDescriptor
          NIT: Can you update the comment (line 745, line 747) to reflect the change of the
          returned type? "FileInputStream" -> "FileDescriptor"

          2. BlockMetadataHeader.java
          Line 149: BlockMetadataHeader#readHeader(File file) can be removed

          Line 85: From the caller of BlockMetadataHeader#readDataChecksum() in
          FsDatasetImpl#computeChecksum, we can get a hook for FileInputStream. Is it possible
          to add a hook for readDataChecksum into FileIoProvider, or use a WrappedFileInputStream,
          to measure the read performance?

          3. BlockReceiver.java
          NIT: Line 1033: BlockReceiver#adjustCrcFilePosition()
          can we use streams.flushChecksumOut() here?

          4. DatanodeUtil.java
          NIT: Line 59: Can we move DatanodeUtil#createFileWithExistsCheck to FileIoProvider like
          we do for mkdirsWithExistsCheck/deleteWithExistsCheck?

          Line 1365: DataStorage#fullyDelete(). I'm OK with deprecating it.
          There seems to be no reference to this method, so maybe we can remove it.

          5. DFSConfigKeys.java
          NIT: Can you add a short description for the new key added or add cross reference to
          the description in FileIoProvider class description.

          6. FsDatasetImpl.java
          NIT: these imports are re-ordered relative to the imports below them
          (only one was added by this change, though):
          import org.apache.hadoop.hdfs.DFSConfigKeys;
          import org.apache.hadoop.hdfs.DFSUtilClient;
          import org.apache.hadoop.hdfs.ExtendedBlockId;
          import org.apache.hadoop.hdfs.server.datanode.FileIoProvider;
          import org.apache.hadoop.util.AutoCloseableLock;

          7. FSVolumeImpl.java
          Line 1075: DatanodeUtil.dirNoFilesRecursive() can be wrapped into FileIoProvider.java to
          get some aggregated metrics of dirNoFilesRecursive() in addition to FileIoProvider#listFiles().

          8. LocalReplica.java
          Line 202: this is a bug. We should delete the tmpFile instead of the file:

          if (!fileIoProvider.delete(getVolume(), file)) 
          

          9. LocalReplicaInPipeline.java
          Line 322, 323: Should we close crcOut like blockOut and metaRAF here?
          Can this be improved with try-with-resources to avoid leaks?

          10. FileIoEvents.java
          Line 89: FileIoEvents#onFailure() can we add a begin parameter for the failure
          code path so that we can track the time spent on FileIo/Metadata before failure.

          11. CountingFileIoEvents.java
          Should we count the number of errors in onFailure()?

          12. FileIoProvider.java
          NIT: some of the methods are missing Javadocs for the last few added
          @param such as flush()/listDirectory()/linkCount()/mkdirs, etc.

          Line 105: NIT: We can add a tag to the enum FileIoProvider#OPERATION to explicitly
          describe the operation type FileIo/Metadata, which could simplify the FileIoEvents interface.
          I'm OK with the current implementation, which is also good and easy to follow.

          Line 155: I think we should put sync() under the fileIo op instead of the metadata op,
          given that we pass true to fos.getChannel().force(true), which forces both metadata
          and data to be written to the device. (See the FileChannel.force sketch after this list.)

          Line 459: FileIoProvider#fullyDelete(): should we declare the exception just for
          fault-injection purposes? FileUtil.fullyDelete() itself does not throw.

          Line 575: NIT: File f -> File dir
          Line 598: NIT: File f -> File dir

          13. ReplicaOutputStreams.java
          Line 148: ReplicaOutputStreams#writeDataToDisk(): should we change
          dataOut/checksumOut to use FileIoProvider#WrappedFileOutputStream
          so that the FileIo writes are counted properly?

          14. ReplicaInputStreams.java
          Line 83: readDataFully(): should we change dataIn/checksumIn
          to use FileIoProvider#WrappedFileInputStream so that the FileIo reads are counted properly?
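
          Regarding point 12 above (classifying sync() as a fileIo op), the distinction hinges on the boolean passed to java.nio.channels.FileChannel#force: force(false) flushes file content, while force(true) also flushes metadata such as length and modification time. A standalone sketch of the two calls (the file name is just an example, not anything from the patch):

          import java.io.FileOutputStream;
          import java.io.IOException;
          import java.nio.channels.FileChannel;
          import java.nio.charset.StandardCharsets;

          public class ForceDemo {
            public static void main(String[] args) throws IOException {
              // "demo.bin" is an arbitrary example path.
              try (FileOutputStream fos = new FileOutputStream("demo.bin")) {
                fos.write("hello".getBytes(StandardCharsets.UTF_8));
                FileChannel channel = fos.getChannel();
                channel.force(false); // flush file content; metadata updates may be deferred
                channel.force(true);  // flush content and metadata to the storage device
              }
            }
          }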

          Arpit Agarwal added a comment -

          Thank you for the thorough review, Xiaoyu Yao! I really appreciate it. The v05 patch addresses most of your feedback. This commit shows the delta between the v04 and v05 patches.

          Comments below:

          1. NIT: Can you update the comment (line 745, line 747) to reflect the changes of the returned type? "FileInputStream" -> "FileDescriptor"

            Fixed.

          2. Line 149: BlockMetadataHeader#readHeader(File file) can be removed

            Removed.

          3. NIT: Line 1033: BlockReceiver#adjustCrcFilePosition(). can we use streams.flushChecksumOut() here?

            We need to call flush on the buffered output stream here. Calling streams.flushChecksumOut() will not flush the buffered data to the underlying FileOutputStream.

          4. NIT: Line 59: Can we move DatanodeUtil#createFileWithExistsCheck to FileIoProvider like we do for mkdirsWithExistsCheck/deleteWithExistsCheck?

            This method was awkward to adapt to the call pattern in FileIoProvider. However, I do pass the individual operations to the FileIoProvider, so the exists/create calls will be instrumented. Let me know if you feel strongly about it.

          5. Line 1365: DataStorage#fullyDelete(). I'm OK with deprecate it.

            Done. Removed the unused method.

          6. NIT: Can you add a short description for the new key added or add cross reference to the description in FileIoProvider class description.

            I intentionally haven't documented this key as it's not targeted for end users. I have the following text in the FileIoProvider javadoc. Let me know if this looks sufficient for now.

             * Behavior can be injected into these events by implementing
             * {@link FileIoEvents} and replacing the default implementation
             * with {@link DFSConfigKeys#DFS_DATANODE_FILE_IO_EVENTS_CLASS_KEY}.
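            For illustration only (not part of the patch), a minimal sketch of how a non-default implementation could be selected through that key; the fully qualified name of CountingFileIoEvents is assumed from the commit file list, and how the DataNode instantiates the configured class is left to the patch itself:

            import org.apache.hadoop.conf.Configuration;
            import org.apache.hadoop.hdfs.DFSConfigKeys;
            import org.apache.hadoop.hdfs.HdfsConfiguration;

            public class FileIoEventsConfigSketch {
              public static void main(String[] args) {
                Configuration conf = new HdfsConfiguration();
                // Swap the default implementation for the counting one added by
                // this patch (class name assumed from the commit file list).
                conf.set(DFSConfigKeys.DFS_DATANODE_FILE_IO_EVENTS_CLASS_KEY,
                    "org.apache.hadoop.hdfs.server.datanode.CountingFileIoEvents");
                // A DataNode started with this Configuration would route its
                // file IO events through the configured class.
              }
            }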
            
          7. NIT: these imports re-ordered with the imports below it

            I don't see this issue in my diffs. Let me know if you still see it.

          8. Line 1075: DatanodeUtil.dirNoFilesRecursive() can be wrapped into FileIoProvider.java to get some aggregated metrics of dirNoFilesRecursive() in addition to FileIoProvider#listFiles().

            I deferred doing this since any disk slowness will show up in the fileIoProvider.listFiles call. Can we re-evaluate instrumenting the recursive call in a follow-up jira?

          9. Line: 202: this is a bug. We should delete the tmpFile instead of the file.

            Good catch, fixed.

          10. Line 322,323: Should we close crcOut like blockOut and metataRAF here? Can this be improved with a try-with-resource to avoid leaking.

            Good catch, fixed it. It looks like this is a pre-existing bug. We can't use try-with-resources, though, as we only want to close the streams when there is an exception.
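            The pattern, as a rough standalone sketch (the names blockOut/crcOut and the hand-off are illustrative; the real code lives in LocalReplicaInPipeline): the streams are closed only if an exception occurs before ownership passes to the caller, which try-with-resources cannot express because it would also close them on success.

            import java.io.File;
            import java.io.FileOutputStream;
            import java.io.IOException;
            import java.io.OutputStream;

            import org.apache.hadoop.io.IOUtils;

            public class StreamHandOffSketch {
              static OutputStream[] openStreams(File blockFile, File metaFile)
                  throws IOException {
                FileOutputStream blockOut = null;
                FileOutputStream crcOut = null;
                try {
                  blockOut = new FileOutputStream(blockFile);
                  crcOut = new FileOutputStream(metaFile);
                  // Success: ownership transfers to the caller, so do not close here.
                  return new OutputStream[] { blockOut, crcOut };
                } catch (IOException e) {
                  // Failure: we still own the streams, so close them to avoid a leak.
                  IOUtils.closeStream(blockOut);
                  IOUtils.closeStream(crcOut);
                  throw e;
                }
              }
            }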

          11. Line 89: FileIoEvents#onFailure() can we add a begin parameter for the failure code path so that we can track the time spent on FileIo/Metadata before failure.

            Done.

          12. CountingFileIoEvents.java - Should we count the number of errors in onFailure()?

            Done.

          13. FileIoProvider.java - NIT: some of the methods are missing Javadocs for the last few added @param such as flush()/listDirectory()/linkCount()/mkdirs, etc.

            Added.

          14. Line 105: NIT: We can add a tag to the enum FileIoProvider#OPERATION to explicitly describe the operation type FileIo/Metadata, which could simplify the FileIoEvents interface. I'm OK with the current implementation, which is also good and easy to follow.

            Leaving it as it is for now to avoid complicating the patch further, but we can definitely revise the interface as we work on implementations.

          15. Line 155: I think we should put sync() under fileIo op instead of metadata op based on we are passing true

            Done.

          16. Line 459: FileIoProvider#fullyDelete() should we declare exception just for fault injection purpose? FileUtil.fullyDelete() itself does not throw.

            Good point. The only exception we could get in fullyDelete is a RuntimeException, so there is no change to the signature. I decided to pass all exceptions to the failure handler (except Errors) and let it decide which ones are interesting to it.
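            In other words (illustrative sketch only; the hook below is a stand-in, not the real FileIoEvents signature), the wrapper notifies the failure handler on any Exception while letting Errors propagate untouched:

            import java.io.File;

            import org.apache.hadoop.fs.FileUtil;

            public class FullyDeleteSketch {
              /** Stand-in for the failure hook; not the real interface. */
              interface FailureHook {
                void onFailure(long beginNanos, Throwable t);
              }

              static boolean fullyDelete(File dir, FailureHook hook) {
                final long begin = System.nanoTime();
                try {
                  return FileUtil.fullyDelete(dir); // only unchecked exceptions escape
                } catch (Exception e) {             // Errors are deliberately not caught
                  hook.onFailure(begin, e);
                  throw e;
                }
              }
            }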

          17. Line 575: NIT: File f -> File dir, Line 598: NIT: File f -> File dir

            Fixed both.

          18. Line 148: ReplicaOutputStreams#writeDataToDisk(), should we change the dataOut/checksumOut to use the FileIoProvider#WrappedFileoutputStream to get the FileIo write counted properly?

            These are already wrapped output streams. See LocalReplicaInPipeline.java:310.

          19. Line 83 readDataFully() should we change the dataIn/checksumIn to use the FileIoProvider#WrappedFileInputStream to get the FileIo read counted properly?

            These are also wrapped input streams. See LocalReplica#getDataInputStream where the streams are allocated.
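            For readers unfamiliar with the wrapping, a toy version of what such a counting wrapper does (the real wrapped output stream inside FileIoProvider also reports timing and failure events, which this sketch omits):

            import java.io.FilterOutputStream;
            import java.io.IOException;
            import java.io.OutputStream;
            import java.util.concurrent.atomic.AtomicLong;

            public class CountingOutputStreamSketch extends FilterOutputStream {
              private final AtomicLong bytesWritten;

              public CountingOutputStreamSketch(OutputStream out, AtomicLong counter) {
                super(out);
                this.bytesWritten = counter;
              }

              @Override
              public void write(byte[] b, int off, int len) throws IOException {
                out.write(b, off, len);         // delegate the actual file IO
                bytesWritten.addAndGet(len);    // ...then account for it
              }

              @Override
              public void write(int b) throws IOException {
                out.write(b);
                bytesWritten.incrementAndGet();
              }
            }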

          I also removed the gson dependency per offline feedback from Anu Engineer.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 18s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 9 new or modified test files.
          0 mvndep 1m 35s Maven dependency ordering for branch
          +1 mvninstall 6m 57s trunk passed
          +1 compile 10m 7s trunk passed
          +1 checkstyle 1m 48s trunk passed
          +1 mvnsite 2m 41s trunk passed
          +1 mvneclipse 0m 56s trunk passed
          +1 findbugs 4m 43s trunk passed
          +1 javadoc 1m 59s trunk passed
          0 mvndep 0m 15s Maven dependency ordering for patch
          +1 mvninstall 1m 53s the patch passed
          +1 compile 9m 17s the patch passed
          +1 javac 9m 17s the patch passed
          -0 checkstyle 1m 47s root: The patch generated 6 new + 1160 unchanged - 10 fixed = 1166 total (was 1170)
          +1 mvnsite 2m 37s the patch passed
          +1 mvneclipse 0m 54s the patch passed
          -1 whitespace 0m 0s The patch has 4 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
          +1 xml 0m 3s The patch has no ill-formed XML file.
          +1 findbugs 5m 7s the patch passed
          +1 javadoc 1m 58s the patch passed
          -1 unit 7m 7s hadoop-common in the patch failed.
          +1 unit 1m 0s hadoop-hdfs-client in the patch passed.
          -1 unit 83m 23s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 35s The patch does not generate ASF License warnings.
          148m 22s



          Reason Tests
          Failed junit tests hadoop.security.token.delegation.web.TestWebDelegationToken
            hadoop.hdfs.TestTrashWithSecureEncryptionZones
            hadoop.hdfs.server.datanode.checker.TestThrottledAsyncChecker
            hadoop.hdfs.TestSecureEncryptionZoneWithKMS



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:a9ad5d6
          JIRA Issue HDFS-10958
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12842913/HDFS-10958.05.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml
          uname Linux f9c3246c405c 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 2d4731c
          Default Java 1.8.0_111
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/17844/artifact/patchprocess/diff-checkstyle-root.txt
          whitespace https://builds.apache.org/job/PreCommit-HDFS-Build/17844/artifact/patchprocess/whitespace-eol.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/17844/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/17844/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/17844/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/17844/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          arpitagarwal Arpit Agarwal added a comment -

          Addressed two more issues raised by Xiaoyu Yao.

          Delta commit from v04 -> v05 patch is here:
          https://github.com/arp7/hadoop/commit/bf8c2eb3013f3dc1354b1db207ef046cd20e6782

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 15s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 9 new or modified test files.
          0 mvndep 1m 47s Maven dependency ordering for branch
          +1 mvninstall 7m 10s trunk passed
          +1 compile 9m 50s trunk passed
          +1 checkstyle 2m 9s trunk passed
          +1 mvnsite 2m 46s trunk passed
          +1 mvneclipse 0m 55s trunk passed
          +1 findbugs 4m 45s trunk passed
          +1 javadoc 1m 59s trunk passed
          0 mvndep 0m 15s Maven dependency ordering for patch
          +1 mvninstall 1m 55s the patch passed
          +1 compile 9m 37s the patch passed
          +1 javac 9m 37s the patch passed
          -0 checkstyle 1m 44s root: The patch generated 6 new + 1159 unchanged - 10 fixed = 1165 total (was 1169)
          +1 mvnsite 2m 58s the patch passed
          +1 mvneclipse 0m 50s the patch passed
          -1 whitespace 0m 0s The patch has 4 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
          +1 xml 0m 1s The patch has no ill-formed XML file.
          +1 findbugs 5m 53s the patch passed
          +1 javadoc 2m 3s the patch passed
          -1 unit 8m 32s hadoop-common in the patch failed.
          +1 unit 1m 7s hadoop-hdfs-client in the patch passed.
          -1 unit 66m 22s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 34s The patch does not generate ASF License warnings.
          134m 45s



          Reason Tests
          Failed junit tests hadoop.security.token.delegation.web.TestWebDelegationToken
            hadoop.net.TestDNS
            hadoop.hdfs.TestTrashWithSecureEncryptionZones
            hadoop.hdfs.TestSecureEncryptionZoneWithKMS



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:a9ad5d6
          JIRA Issue HDFS-10958
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12843122/HDFS-10958.06.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml
          uname Linux c971110f78ef 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 9947aeb
          Default Java 1.8.0_111
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/17851/artifact/patchprocess/diff-checkstyle-root.txt
          whitespace https://builds.apache.org/job/PreCommit-HDFS-Build/17851/artifact/patchprocess/whitespace-eol.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/17851/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/17851/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/17851/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/17851/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          xyao Xiaoyu Yao added a comment -

          Thanks Arpit Agarwal for updating the patch. +1 for v6 patch.
          I've validated that the unit test failures are unrelated. The checkstyle and whitespace issues can be fixed at commit.

          arpitagarwal Arpit Agarwal added a comment -

          Thank you Xiaoyu Yao. Fixed the following checkstyle issue and pushed to trunk.

          --- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockMetadataHeader.java
          +++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockMetadataHeader.java
          @@ -31,7 +31,6 @@
          
           import org.apache.hadoop.classification.InterfaceAudience;
           import org.apache.hadoop.classification.InterfaceStability;
          -import org.apache.hadoop.io.IOUtils;
           import org.apache.hadoop.util.DataChecksum;
          

          The remaining checkstyle issues are unrelated to the patch (there is one false positive).

          hudson Hudson added a comment -

          SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10997 (See https://builds.apache.org/job/Hadoop-trunk-Commit/10997/)
          HDFS-10958. Add instrumentation hooks around Datanode disk IO. (arp: rev 6ba9587d370fbf39c129c08c00ebbb894ccc1389)

          • (edit) hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml
          • (add) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FileIoProvider.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/LocalReplicaInPipeline.java
          • (add) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DefaultFileIoEvents.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalReplicaInPipeline.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalDatasetImpl.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/LocalReplica.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeSpi.java
          • (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInfo.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImplBuilder.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/ReplicaOutputStreams.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetUtil.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetAsyncDiskService.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/ReplicaInputStreams.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestHdfsConfigFields.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockMetadataHeader.java
          • (add) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FileIoEvents.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalVolumeImpl.java
          • (add) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/CountingFileIoEvents.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestSimulatedFSDataset.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNodeMXBean.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DatanodeUtil.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipeline.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
          arpitagarwal Arpit Agarwal added a comment - - edited

          Cloned this as HDFS-11337 for the branch-2 backport.

          The backport required many changes (due to branch divergence caused by HDFS-10637), so I'm requesting a careful review of the branch-2 patch too.


            People

            • Assignee:
              arpitagarwal Arpit Agarwal
              Reporter:
              xyao Xiaoyu Yao
            • Votes:
              0
              Watchers:
              10
