Hadoop HDFS / HDFS-10372

Fix for failing TestFsDatasetImpl#testCleanShutdownOfVolume

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.7.3
    • Fix Version/s: 2.8.0, 2.7.3, 3.0.0-alpha1
    • Component/s: test
    • Labels:
      None
    • Target Version/s:
    • Hadoop Flags:
      Reviewed

      Description

      TestFsDatasetImpl#testCleanShutdownOfVolume fails very often.
      We added more debug information in HDFS-10260 to find out why this test is failing.
      Now I think I know the root cause of the failure.
      I thought that LocatedBlock#getLocations() returns an array of DatanodeInfo, but now I realize that it returns an array of DatanodeStorageInfo (which is a subclass of DatanodeInfo).
      In the test I intended to check whether the exception contains the xfer address of the DatanodeInfo. Since DatanodeInfo#toString() returns the xfer address, I checked whether the exception contains DatanodeInfo#toString() or not.
      But since LocatedBlock#getLocations() returned an array of DatanodeStorageInfo, the toString() implementation also includes the storage info, so the check did not match.
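The mismatch can be illustrated with a self-contained sketch. The class bodies below are simplified stand-ins for the HDFS classes, not their real implementations (the actual subclass returned is DatanodeInfoWithStorage, as corrected later in the comments); checking for the bare xfer address matches either representation:

```java
// Simplified stand-in: in HDFS, toString() (inherited from DatanodeID)
// returns the xfer address.
class DatanodeInfo {
    private final String xferAddr;
    DatanodeInfo(String xferAddr) { this.xferAddr = xferAddr; }
    String getXferAddr() { return xferAddr; }
    @Override public String toString() { return getXferAddr(); }
}

// The subclass embeds the storage ID, so its toString() is no longer
// equal to the bare xfer address.
class DatanodeInfoWithStorage extends DatanodeInfo {
    private final String storageId;
    DatanodeInfoWithStorage(String xferAddr, String storageId) {
        super(xferAddr);
        this.storageId = storageId;
    }
    @Override public String toString() {
        return "DatanodeInfoWithStorage[" + getXferAddr() + "," + storageId + ",DISK]";
    }
}

public class XferAddrCheck {
    // Stable check: the xfer address appears in both toString() variants.
    static boolean containsXferAddr(String exceptionMsg, DatanodeInfo dn) {
        return exceptionMsg.contains(dn.getXferAddr());
    }

    public static void main(String[] args) {
        DatanodeInfo dn =
            new DatanodeInfoWithStorage("127.0.0.1:59604", "DS-dummy-id");
        String msg = "All datanodes [" + dn + "] are bad. Aborting...";
        System.out.println(containsXferAddr(msg, dn)); // prints true
    }
}
```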

      1. HDFS-10372.patch
        1.0 kB
        Rushabh S Shah

        Issue Links

          Activity

          vinodkv Vinod Kumar Vavilapalli added a comment -

          Closing the JIRA as part of 2.7.3 release.

          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #9737 (See https://builds.apache.org/job/Hadoop-trunk-Commit/9737/)
          HDFS-10372. Fix for failing TestFsDatasetImpl#testCleanShutdownOfVolume. (kihwal: rev b9e5a32fa14b727b44118ec7f43fb95de05a7c2c)

          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java
          shahrs87 Rushabh S Shah added a comment -

          Thanks Kihwal Lee for reviews and committing.
          Thanks Xiao Chen and Masatake Iwasaki for reviews.

          kihwal Kihwal Lee added a comment -

          I've committed this to trunk through branch-2.7. Thanks for fixing this, Rushabh S Shah. Thanks for valuable reviews, Xiao Chen and Masatake Iwasaki.

          kihwal Kihwal Lee added a comment - - edited

          Masatake Iwasaki has already +1'ed it, meaning the suggested change is not strictly necessary.
          I am committing this as is.

          shahrs87 Rushabh S Shah added a comment -

          it's up to you to decide whether to improve it as Masatake Iwasaki suggested.

          I don't think this is required.
          I edited the test on my machine to create a file after one of the volumes went bad. It was able to create the file, and I can see the block on the datanode's good volume. But I don't see any value in adding it to the patch.
          Masatake Iwasaki: Let me know if you still want me to create a new file. I will edit my patch.

          shahrs87 Rushabh S Shah added a comment -

          The test expected that the message in exception on out.close() contains the name of failed volume (to which the replica was written) but it contained only info about live volume (data2).

          When the client asked for the locations of the first block, the namenode selected a datanode with a random storage within that datanode.
          Refer to the DataStreamer.locateFollowingBlock(DatanodeInfo[] excludedNodes) method for more details.
          When the client started writing to the datanode, the datanode selected a volume according to the RoundRobinVolumeChoosingPolicy, and it can select a storage different from what the namenode has stored in its triplets.
          When the datanode sends an IBR (with RECEIVING_BLOCK), the namenode changes the storage info in its triplets to the storage info the datanode reported.
          But the change in storage info is not propagated back to the client.
          So the client still has stale storage info.
          When the client tried to close the file, the datanode threw an exception (since the volume had gone bad), but since the client had stale storage info, it saved the exception with the old storage info.
          This is the reason why the test was flaky in the first place.
          On my machine, the test finishes within 2 seconds, so the datanode didn't send any IBR and the storage info was not changed on the namenode.
          But on the jenkins build machines, the test ran for more than 8 seconds, which gave the datanode ample time to send an IBR.
          Masatake Iwasaki: I hope this answers your question.
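Under that race, only the xfer address is stable between the client's cached location and the datanode's current storage; a full toString() match is not. A minimal sketch of the fragile versus stable check, using the storage IDs from the logs quoted below (this is an illustration, not the committed patch):

```java
public class StaleStorageCheck {
    // Message built from the datanode's *current* storage after the IBR.
    static final String EXCEPTION_MSG =
        "All datanodes [DatanodeInfoWithStorage[127.0.0.1:59604,"
            + "DS-8f82ba58-61ae-4cb1-b019-0c387d25b5d2,DISK]] are bad. Aborting...";

    // What the client still holds: the storage the namenode picked initially.
    static final String STALE_TO_STRING =
        "DatanodeInfoWithStorage[127.0.0.1:59604,"
            + "DS-040e0757-ea7f-4465-80e3-9f8c00abeb83,DISK]";

    // Fragile: embeds a storage ID that the IBR may have replaced.
    static boolean fragileCheck() {
        return EXCEPTION_MSG.contains(STALE_TO_STRING);
    }

    // Stable: the xfer address survives storage changes on the datanode.
    static boolean stableCheck() {
        return EXCEPTION_MSG.contains("127.0.0.1:59604");
    }

    public static void main(String[] args) {
        System.out.println(fragileCheck()); // prints false: this is the flakiness
        System.out.println(stableCheck());  // prints true
    }
}
```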

          kihwal Kihwal Lee added a comment -

          Rushabh S Shah, it's up to you to decide whether to improve it as Masatake Iwasaki suggested.

          iwasakims Masatake Iwasaki added a comment -

          I'm +1 too on this, though I think it would be better to create and write another file after one of the volumes is removed, in order to make sure that the datanode is still available.

          I put the error logs from my environment here as a reference.

          DataNode in mini cluster was started with 2 volumes (data1 and data2).

          2016-05-08 10:00:39,003 [DataNode: [[[DISK]file:/home/iwasakims/srcs/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/, [DISK]file:/home/iwasakims/srcs/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2/]]  heartbeating to localhost/127.0.0.1:37720] INFO  common.Storage (DataStorage.java:createStorageID(158)) - Generated new storageID DS-040e0757-ea7f-4465-80e3-9f8c00abeb83 for directory /home/iwasakims/srcs/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1
          ...(snip)
          2016-05-08 10:00:39,109 [DataNode: [[[DISK]file:/home/iwasakims/srcs/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/, [DISK]file:/home/iwasakims/srcs/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2/]]  heartbeating to localhost/127.0.0.1:37720] INFO  common.Storage (DataStorage.java:createStorageID(158)) - Generated new storageID DS-8f82ba58-61ae-4cb1-b019-0c387d25b5d2 for directory /home/iwasakims/srcs/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2
          

          The volume of data1 was removed since the test broke it.

          2016-05-08 10:00:42,321 [IPC Server handler 4 on 37720] INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateHeartbeatState(453)) - Number of failed storages changes from 0 to 1
          2016-05-08 10:00:42,321 [IPC Server handler 4 on 37720] INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:updateFailedStorage(539)) - [DISK]DS-040e0757-ea7f-4465-80e3-9f8c00abeb83:NORMAL:127.0.0.1:59604 failed.
          2016-05-08 10:00:42,321 [IPC Server handler 4 on 37720] INFO  blockmanagement.DatanodeDescriptor (DatanodeDescriptor.java:pruneStorageMap(525)) - Removed storage [DISK]DS-040e0757-ea7f-4465-80e3-9f8c00abeb83:FAILED:127.0.0.1:59604 from DataNode 127.0.0.1:59604
          

          The test expected that the message in exception on out.close() contains the name of failed volume (to which the replica was written) but it contained only info about live volume (data2).

          testCleanShutdownOfVolume(org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl)  Time elapsed: 8.468 sec  <<< FAILURE!
          java.lang.AssertionError: Expected to find 'DatanodeInfoWithStorage[127.0.0.1:59604,DS-040e0757-ea7f-4465-80e3-9f8c00abeb83,DISK]' but got unexpected exception:java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:59604,DS-8f82ba58-61ae-4cb1-b019-0c387d25b5d2,DISK]] are bad. Aborting...
          	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1395)
          	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1338)
          	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1325)
          	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1122)
          	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:549)
          
          xiaochen Xiao Chen added a comment -

          I see... thanks for the explanation!

          shahrs87 Rushabh S Shah added a comment -

          Thanks for the review.

          shahrs87 Rushabh S Shah added a comment -

          I linked this to HDFS-10260.

          Thanks !

          shahrs87 Rushabh S Shah added a comment -

          s/DatanodeStorageInfo/DatanodeInfoWithStorage
          The names are so confusing.
          DatanodeInfoWithStorage is a subclass of DatanodeInfo.

          xiaochen Xiao Chen added a comment -

          Thanks for the contribution Rushabh S Shah and Kihwal Lee. I linked this to HDFS-10260.

          I thought that LocatedBlock#getLocations() returns an array of DatanodeInfo but now I realized that it returns an array of DatanodeStorageInfo (which is a subclass of DatanodeInfo).

          Sorry, I don't understand this statement. DatanodeID appears to be a superclass of DatanodeInfo, and has the getXferAddr method. DatanodeStorageInfo is not a subclass of the above. The fix makes sense to me though (verifying the xferAddr).

          I guess you meant to say DatanodeID#toString (which calls getXferAddr()) overrides DatanodeInfo#toString (which doesn't)?

          kihwal Kihwal Lee added a comment -

          +1. I will commit it shortly.

          shahrs87 Rushabh S Shah added a comment -

          I don't think this patch has caused all the failing tests, since the patch only changes one line in the test case.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 10s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          +1 mvninstall 6m 42s trunk passed
          +1 compile 0m 41s trunk passed with JDK v1.8.0_91
          +1 compile 0m 40s trunk passed with JDK v1.7.0_95
          +1 checkstyle 0m 21s trunk passed
          +1 mvnsite 0m 50s trunk passed
          +1 mvneclipse 0m 13s trunk passed
          +1 findbugs 1m 53s trunk passed
          +1 javadoc 1m 2s trunk passed with JDK v1.8.0_91
          +1 javadoc 1m 47s trunk passed with JDK v1.7.0_95
          +1 mvninstall 0m 47s the patch passed
          +1 compile 0m 38s the patch passed with JDK v1.8.0_91
          +1 javac 0m 38s the patch passed
          +1 compile 0m 39s the patch passed with JDK v1.7.0_95
          +1 javac 0m 39s the patch passed
          +1 checkstyle 0m 17s the patch passed
          +1 mvnsite 0m 48s the patch passed
          +1 mvneclipse 0m 11s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 2m 10s the patch passed
          +1 javadoc 1m 3s the patch passed with JDK v1.8.0_91
          +1 javadoc 1m 42s the patch passed with JDK v1.7.0_95
          -1 unit 57m 31s hadoop-hdfs in the patch failed with JDK v1.8.0_91.
          -1 unit 54m 49s hadoop-hdfs in the patch failed with JDK v1.7.0_95.
          +1 asflicense 0m 23s Patch does not generate ASF License warnings.
          137m 19s



          Reason Tests
          JDK v1.8.0_91 Failed junit tests hadoop.hdfs.server.namenode.TestDecommissioningStatus
          JDK v1.7.0_95 Failed junit tests hadoop.hdfs.server.namenode.TestEditLog
            hadoop.hdfs.TestHFlush



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:cf2ee45
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12802583/HDFS-10372.patch
          JIRA Issue HDFS-10372
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 33b258198e8f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 8d48266
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_91 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/15379/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_91.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/15379/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/15379/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_91.txt https://builds.apache.org/job/PreCommit-HDFS-Build/15379/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/15379/testReport/
          modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/15379/console
          Powered by Apache Yetus 0.2.0 http://yetus.apache.org

          This message was automatically generated.

          shahrs87 Rushabh S Shah added a comment -

          Kihwal Lee, Wei-Chiu Chuang: can you please review.


            People

            • Assignee:
              shahrs87 Rushabh S Shah
            • Reporter:
              shahrs87 Rushabh S Shah
            • Votes:
              0
            • Watchers:
              8

              Dates

              • Created:
                Updated:
                Resolved:
