Hadoop HDFS / HDFS-6114

Block Scan log rolling will never happen if blocks are written continuously, leading to huge size of dncp_block_verification.log.curr

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 2.3.0, 2.4.0
    • Fix Version/s: 2.6.0
    • Component/s: datanode
    • Labels: None

      Description

      1. BlockPoolSliceScanner#scan() will not return until all the blocks are scanned.
      2. If blocks (several MB in size) are written to the datanode continuously, then one iteration of BlockPoolSliceScanner#scan() will keep scanning them.
      3. These blocks will be deleted after some time (long enough for them to get scanned).
      4. Since block scanning is throttled, verifying all the blocks takes a very long time.
      5. Rolling will never happen, so even though the total number of blocks in the datanode doesn't increase, the entries (which include stale entries for deleted blocks) in dncp_block_verification.log.curr keep growing, leading to a huge file.

      In one of our environments, the log grew to more than 1 TB while the total number of blocks was only ~45k.
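
      In sketch form, the failure mode looks like this (a hypothetical, simplified loop; the real logic is spread across BlockPoolSliceScanner, and verify()/pickLeastRecentlyScanned()/shouldRun() are illustrative names, not the actual code):

        void scan() {
          // The set never empties while clients keep writing, because each
          // newly written replica is added to the same work list.
          while (!blockInfoSet.isEmpty() && shouldRun()) {
            verify(pickLeastRecentlyScanned());  // throttled, hence slow
          }
          // The verification log is rolled only after scan() returns,
          // so stale entries for deleted blocks are never purged.
        }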

      1. HDFS-6114.patch
        5 kB
        Vinayakumar B
      2. HDFS-6114.patch
        5 kB
        Vinayakumar B
      3. HDFS-6114.patch
        4 kB
        Vinayakumar B
      4. HDFS-6114.patch
        3 kB
        Vinayakumar B

        Activity

        vinayrpet Vinayakumar B added a comment -

        My proposal is to apply a limit to the number of blocks scanned per iteration of BlockPoolSliceScanner#scan(), so that rolling can remove stale entries from the verification logs.

        Any thoughts?
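
        For illustration, the shape of the proposal might be the following (all names here are hypothetical, not the actual patch):

          private void scan() {
            int scanned = 0;
            // Stop after a bounded number of blocks instead of draining the
            // whole (ever-growing) work list.
            while (moreBlocksToScan() && scanned < maxBlocksPerIteration) {
              verifyNextBlock();
              scanned++;
            }
            // Returning here gives the caller a chance to roll
            // dncp_block_verification.log.curr and drop stale entries.
          }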

        vinayrpet Vinayakumar B added a comment -

        Attaching the proposed patch. Please review.

        hadoopqa Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12635270/HDFS-6114.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        -1 tests included. The patch doesn't appear to include any new or modified tests.
        Please justify why no new tests are needed for this patch.
        Also please list what manual steps were performed to verify this patch.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. There were no new javadoc warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

        org.apache.hadoop.hdfs.TestSafeMode

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/6422//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6422//console

        This message is automatically generated.

        vinayrpet Vinayakumar B added a comment -

        Hi,
        Can someone take a look at the patch?

        Thanks in advance.

        cmccabe Colin P. McCabe added a comment -

        Hi Vinayakumar,

        Good find.

        This proposed configuration parameter seems difficult to tune. Most people don't keep careful track of how many blocks they scan each time the scanner runs, and would not have a clear idea of how to adjust it. I can also see some people setting this incorrectly and ending up with a cluster where the block scanner falls further and further behind, and 99% of the blocks on the DN are never scanned.

        Rather than introducing another configuration parameter, how about simply not adding the new blocks to the existing scan? Perhaps BlockPoolScanner#addBlock could add the block to a secondary map that would get dumped into the main map when the current scan had terminated. This way, scan() will always return at some point, when the current blocks are done, even if new blocks have been created. The main point of the block scanner is to scan older blocks we haven't touched in weeks, not to re-read stuff we just wrote, so this seems like a better behavior anyway.
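
        In sketch form, the suggestion might look like this (method names are illustrative; BlockScanInfo and blockInfoSet are from BlockPoolSliceScanner):

          private synchronized void addNewBlockInfo(BlockScanInfo info) {
            // New blocks go only into a staging set, so the work list of the
            // scan that is already running stays fixed and scan() terminates.
            newBlockInfoSet.add(info);
          }

          private synchronized void rollNewBlocksIntoMainSet() {
            // Called once the current scan has finished: promote the staged
            // blocks so the next iteration picks them up.
            blockInfoSet.addAll(newBlockInfoSet);
            newBlockInfoSet.clear();
          }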

        vinayrpet Vinayakumar B added a comment -

        Thanks Colin. Sorry for the late response. I will try implementing your suggestion and post a new patch soon.

        vinayrpet Vinayakumar B added a comment -

        Attaching the patch as per Colin P. McCabe's suggestion.

        cmccabe Colin P. McCabe added a comment -

        I don't really see a good reason to separate delBlockInfo and delNewBlockInfo. It seems like this could just lead to scenarios where we think we're deleting a block but it pops back up (because we called the delete for one set but not the other).

        I guess maybe it makes sense to separate addBlockInfo from addNewBlockInfo, just because there are places in the setup code where we're willing to add stuff directly to blockInfoSet. Even in that case, I would argue it might be easier to call addNewBlockInfo and then later roll all the newBlockInfoSet items into blockInfoSet. The problem is that having both functions creates confusion and increases the chance that someone will add an incorrect call to the wrong one later on in another change.

          private final SortedSet<BlockScanInfo> blockInfoSet
              = new TreeSet<BlockScanInfo>(BlockScanInfo.LAST_SCAN_TIME_COMPARATOR);
        
          private final Set<BlockScanInfo> newBlockInfoSet =
              new HashSet<BlockScanInfo>();
        

        It seems like a bad idea to use BlockScanInfo.LAST_SCAN_TIME_COMPARATOR for blockInfoSet, but BlockScanInfo#hashCode (i.e. the HashSet strategy) for newBlockInfoSet. Let's just use a SortedSet for both so we don't have to ponder any possible discrepancies between the comparator and the hash function. Another problem with HashSet (compared with TreeSet) is that it never shrinks down after enlarging... a bad property for a temporary holding area.

        hadoopqa Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12645954/HDFS-6114.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        -1 tests included. The patch doesn't appear to include any new or modified tests.
        Please justify why no new tests are needed for this patch.
        Also please list what manual steps were performed to verify this patch.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. There were no new javadoc warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

        org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7340//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7340//console

        This message is automatically generated.

        vinayrpet Vinayakumar B added a comment -

        Thanks Colin P. McCabe, I will recheck all your points and post an updated patch soon.

        vinayrpet Vinayakumar B added a comment -

        I don't really see a good reason to separate delBlockInfo and delNewBlockInfo. It seems like this could just lead to scenarios where we think we're deleting a block but it pops back up (because we deleted, but did not delete new)

        Here, the two methods work on different sets. delBlockInfo is used in other places as well, to update the scan time and re-sort blockInfoSet.
        delNewBlockInfo only needs to be called when deleting the block itself, as intermediate updates will not happen on that set's data.
        So delBlockInfo and delNewBlockInfo serve separate purposes, and both are required.

        I guess maybe it makes sense to separate addBlockInfo from addNewBlockInfo, just because there are places in the setup code where we're willing to add stuff directly to blockInfoSet. Even in that case, I would argue it might be easier to call addNewBlockInfo and then later roll all the newBlockInfoSet items into blockInfoSet. The problem is that having both functions creates confusion and increase the chance that someone will add an incorrect call to the wrong one later on in another change.

        As I see it, both these methods are private and act on different sets. Since the method name itself suggests that addNewBlockInfo is only for new blocks, I don't see any confusion here.

        It seems like a bad idea to use BlockScanInfo.LAST_SCAN_TIME_COMPARATOR for blockInfoSet, but BlockScanInfo#hashCode (i.e. the HashSet strategy) for newBlockInfoSet. Let's just use a SortedSet for both so we don't have to ponder any possible discrepancies between the comparator and the hash function.

        blockInfoSet is required to be sorted based on lastScanTime, as the oldest-scanned block is picked for scanning and will always be the first element in this set. BlockScanInfo.LAST_SCAN_TIME_COMPARATOR is used because BlockScanInfo's default ordering is based on the blockId rather than the scan time.
        Do you suggest I update hashCode() itself?
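
        For reference, a simplified sketch of that ordering (the real comparator is BlockScanInfo.LAST_SCAN_TIME_COMPARATOR; field and method names here are illustrative):

          static final Comparator<BlockScanInfo> LAST_SCAN_TIME_COMPARATOR =
              new Comparator<BlockScanInfo>() {
                @Override
                public int compare(BlockScanInfo a, BlockScanInfo b) {
                  // Least recently scanned block first; tie-break on blockId
                  // so distinct blocks with equal scan times are both kept.
                  if (a.lastScanTime != b.lastScanTime) {
                    return a.lastScanTime < b.lastScanTime ? -1 : 1;
                  }
                  return Long.compare(a.getBlockId(), b.getBlockId());
                }
              };
          // blockInfoSet.first() is then always the next block to scan.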

        Another problem with HashSet (compared with TreeSet) is that it never shrinks down after enlarging... a bad property for a temporary holding area

        Yes, I agree with this; I will update it in the next patch.

        cmccabe Colin P. McCabe added a comment -

        blockInfoSet is required to be sorted based on the lastScanTime, as oldest scanned block will be picked for scanning, which will be the first element in this set always. BlockScanInfo.LAST_SCAN_TIME_COMPARATOR is used because BlockScanInfo#hashCode() is default which will sort based on the blockId rather than scan time. Do you suggest me to update this hashCode() itself?

        I was suggesting that you use a TreeSet or TreeMap with the same comparator as blockInfoSet. None of the hash set implementations I'm aware of shrink down after enlarging.

        So delBlockInfo and delNewBlockInfo serves separate purposes and both are required.

        I can write a version of the patch that only has one del function and only one add function. I am really reluctant to put in another set of add/del functions on top of what's already there, since I think it will make things hard to understand for people trying to modify this code later or backport this patch to other branches.
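
        Something along these lines, perhaps (a sketch of the consolidation being described; the boolean flag is illustrative):

          private synchronized void addBlockInfo(BlockScanInfo info, boolean isNewBlock) {
            if (isNewBlock) {
              newBlockInfoSet.add(info);  // rolled into blockInfoSet after the scan
            } else {
              blockInfoSet.add(info);     // setup-time entries go straight in
            }
          }

          private synchronized void delBlockInfo(BlockScanInfo info) {
            // A single deletion path that always checks both sets, so a deleted
            // block cannot "pop back up" out of the staging set.
            if (!newBlockInfoSet.remove(info)) {
              blockInfoSet.remove(info);
            }
          }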

        vinayrpet Vinayakumar B added a comment -

        Attached the updated patch:
        1. Used TreeSet for newBlockInfoSet.
        2. Merged the add/del methods.

        Please review.

        hadoopqa Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12656002/HDFS-6114.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        -1 tests included. The patch doesn't appear to include any new or modified tests.
        Please justify why no new tests are needed for this patch.
        Also please list what manual steps were performed to verify this patch.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. There were no new javadoc warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

        org.apache.hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport
        org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
        org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA
        org.apache.hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage
        org.apache.hadoop.hdfs.server.datanode.TestMultipleNNDataBlockScanner

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7357//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7357//console

        This message is automatically generated.

        cmccabe Colin P. McCabe added a comment -
          // add new blocks to scan in next iteration
          private synchronized void rollNewBlocksInfo() {
            for (BlockScanInfo newBlock : newBlockInfoSet) {
              blockInfoSet.add(newBlock);
            }
          }
        

        I think we need to clear the newBlockInfoSet here.
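
        For instance (the method from the patch, with the requested clear() added):

          // add new blocks to scan in next iteration
          private synchronized void rollNewBlocksInfo() {
            for (BlockScanInfo newBlock : newBlockInfoSet) {
              blockInfoSet.add(newBlock);
            }
            // Without this, every staged block would be re-added (and
            // re-scanned) on each subsequent roll.
            newBlockInfoSet.clear();
          }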

        +    boolean exists = newBlockInfoSet.remove(info);
        +    exists = exists || blockInfoSet.remove(info);
        

        I guess this is a nit, but I'd prefer just another "if" statement to the || construct.
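
        That is, something like:

          boolean exists = newBlockInfoSet.remove(info);
          if (!exists) {
            exists = blockInfoSet.remove(info);
          }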

        vinayrpet Vinayakumar B added a comment -

        Updated the patch with the above comments. Please review.

        hadoopqa Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12656200/HDFS-6114.patch
        against trunk revision .

        -1 patch. Trunk compilation may be broken.

        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7368//console

        This message is automatically generated.

        vinayrpet Vinayakumar B added a comment -

        It seems there was a compilation error in this build even before applying the patch. Later builds don't have that problem, so I triggered Jenkins again.

        hadoopqa Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12656200/HDFS-6114.patch
        against trunk revision .

        +1 @author. The patch does not contain any @author tags.

        -1 tests included. The patch doesn't appear to include any new or modified tests.
        Please justify why no new tests are needed for this patch.
        Also please list what manual steps were performed to verify this patch.

        +1 javac. The applied patch does not increase the total number of javac compiler warnings.

        +1 javadoc. There were no new javadoc warning messages.

        +1 eclipse:eclipse. The patch built with eclipse:eclipse.

        +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

        +1 release audit. The applied patch does not increase the total number of release audit warnings.

        -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

        org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA
        org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

        +1 contrib tests. The patch passed contrib unit tests.

        Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7373//testReport/
        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7373//console

        This message is automatically generated.

        vinayrpet Vinayakumar B added a comment -

        Hi Colin P. McCabe, could you please take a look at the updated patch whenever you find time?
        Thanks in advance.

        cmccabe Colin P. McCabe added a comment -

        +1. Thanks, Vinayakumar.

        hudson Hudson added a comment -

        FAILURE: Integrated in Hadoop-trunk-Commit #5954 (See https://builds.apache.org/job/Hadoop-trunk-Commit/5954/)
        HDFS-6114. Block Scan log rolling will never happen if blocks written continuously leading to huge size of dncp_block_verification.log.curr (vinayakumarb via cmccabe) (cmccabe: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1612943)

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceScanner.java
        vinayrpet Vinayakumar B added a comment -

        Thanks a lot Colin P. McCabe for reviews and commit.

        hudson Hudson added a comment -

        FAILURE: Integrated in Hadoop-Yarn-trunk #622 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/622/)
        HDFS-6114. Block Scan log rolling will never happen if blocks written continuously leading to huge size of dncp_block_verification.log.curr (vinayakumarb via cmccabe) (cmccabe: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1612943)

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceScanner.java
        hudson Hudson added a comment -

        FAILURE: Integrated in Hadoop-Hdfs-trunk #1814 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1814/)
        HDFS-6114. Block Scan log rolling will never happen if blocks written continuously leading to huge size of dncp_block_verification.log.curr (vinayakumarb via cmccabe) (cmccabe: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1612943)

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceScanner.java
        hudson Hudson added a comment -

        SUCCESS: Integrated in Hadoop-Mapreduce-trunk #1841 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1841/)
        HDFS-6114. Block Scan log rolling will never happen if blocks written continuously leading to huge size of dncp_block_verification.log.curr (vinayakumarb via cmccabe) (cmccabe: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1612943)

        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
        • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceScanner.java

          People

          • Assignee: vinayrpet Vinayakumar B
          • Reporter: vinayrpet Vinayakumar B
          • Votes: 0
          • Watchers: 13
