  Hadoop HDFS / HDFS-6114

Block scan log rolling will never happen if blocks are written continuously, leading to a huge dncp_block_verification.log.curr


Details

    • Type: Bug
    • Status: Closed
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 2.3.0, 2.4.0
    • Fix Version/s: 2.6.0
    • Component/s: datanode
    • Labels: None

    Description

      1. BlockPoolSliceScanner#scan() will not return until all the blocks are scanned.
      2. If blocks (each several MB in size) are continuously written to the datanode, a single iteration of BlockPoolSliceScanner#scan() keeps scanning the newly arriving blocks.
      3. These blocks are deleted after some time (long enough for them to have been scanned).
      4. Because block scanning is throttled, verifying all the blocks takes a long time.
      5. Log rolling therefore never happens, so even though the total number of blocks on the datanode does not increase, the entries in dncp_block_verification.log.curr (including stale entries for already-deleted blocks) keep accumulating, leading to a huge file (see the sketch below the description).

      In one of our environments, the log grew to more than 1 TB even though the total number of blocks was only ~45k.
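
      A minimal sketch of the failure mode follows. This is not the actual BlockPoolSliceScanner code; the class and method names (SimplifiedScanner, VerificationLog, scanIteration(), addBlock()) and the fixed sleep standing in for scan throttling are hypothetical. It only illustrates why the roll never runs while new blocks keep arriving: the log is rolled only after a full scan iteration completes, and continuous writes prevent the iteration from ever completing.

{code:java}
// Hypothetical sketch, not Hadoop source: the verification log is rolled only
// once a full scan iteration finishes, so under continuous block writes the
// iteration never ends and the .curr log grows without bound.
import java.util.ArrayDeque;
import java.util.Queue;

class SimplifiedScanner {
    private final Queue<Long> blocksToScan = new ArrayDeque<>();
    private final VerificationLog log = new VerificationLog();

    /** Called whenever a new block is written to the datanode. */
    void addBlock(long blockId) {
        blocksToScan.add(blockId);
    }

    /** One "iteration": scan until the queue is empty, then roll the log. */
    void scanIteration() throws InterruptedException {
        while (!blocksToScan.isEmpty()) {   // never empties under continuous writes
            long blockId = blocksToScan.poll();
            log.append(blockId);            // appends to the .curr file; stale entries
                                            // for later-deleted blocks stay in place
            Thread.sleep(10);               // stands in for scan throttling
        }
        log.roll();                         // unreachable while writes keep coming
    }

    static class VerificationLog {
        void append(long blockId) { /* record "block verified at <time>" in .curr */ }
        void roll() { /* rotate .curr to .prev and start a fresh .curr */ }
    }
}
{code}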

      Attachments

        1. HDFS-6114.patch
          3 kB
          Vinayakumar B
        2. HDFS-6114.patch
          4 kB
          Vinayakumar B
        3. HDFS-6114.patch
          5 kB
          Vinayakumar B
        4. HDFS-6114.patch
          5 kB
          Vinayakumar B


          People

            Assignee: Vinayakumar B (vinayakumarb)
            Reporter: Vinayakumar B (vinayakumarb)
            Votes: 0
            Watchers: 12

            Dates

              Created:
              Updated:
              Resolved: