Hadoop Common / HADOOP-1838

Files created with a pre-0.15 release get a blocksize of zero, causing performance degradation

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 0.15.0
    • Fix Version/s: 0.15.0
    • Component/s: None
    • Labels:
      None

      Description

      HADOOP-1656 introduced support for storing the block size persistently as inode metadata. Previously, if a file had only one block, it was not possible to accurately determine the blocksize that the application had requested at file-creation time.
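
      For context, here is a minimal sketch of where the persisted value surfaces to clients: the blocksize passed to FileSystem.create() can later be read back through FileStatus.getBlockSize(), which only reflects the requested value once the namenode actually stores it. The path and sizes below are illustrative assumptions, not taken from the issue.

      ```java
      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FSDataOutputStream;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;

      public class BlockSizeRoundTrip {
        public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();
          FileSystem fs = FileSystem.get(conf);

          // Hypothetical path and sizes, for illustration only.
          Path p = new Path("/tmp/blocksize-demo");
          long requestedBlockSize = 64L * 1024 * 1024;   // blocksize asked for at create time

          // The blocksize requested here is what HADOOP-1656 started persisting
          // as inode metadata on the namenode.
          FSDataOutputStream out = fs.create(
              p, true, conf.getInt("io.file.buffer.size", 4096),
              (short) 3, requestedBlockSize);
          out.write(new byte[1024]);   // the file ends up with a single, short block
          out.close();

          // With the blocksize persisted, this reports the requested value even
          // for a file whose only block is shorter than the blocksize.
          System.out.println("reported blocksize = " + fs.getFileStatus(p).getBlockSize());
        }
      }
      ```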

      The upgrade from an older layout to the new layout kept the blocksize as zero for single-block files, to indicate that DFS really does not know the "true" blocksize of such files. This caused map-reduce to determine that a split is 1 byte in length!
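
      The 1-byte split follows from the usual split-size arithmetic once the reported blocksize is zero. The sketch below is an assumption about the general shape of that calculation (names and constants are illustrative), not the exact 0.15 InputFormat code:

      ```java
      public class SplitSizeDemo {
        // Typical split-size rule: clamp the goal size by the block size,
        // but never go below the configured minimum split size.
        static long computeSplitSize(long goalSize, long minSize, long blockSize) {
          return Math.max(minSize, Math.min(goalSize, blockSize));
        }

        public static void main(String[] args) {
          long fileLength = 10L * 1024 * 1024;   // 10 MB file (illustrative)
          long goalSize = fileLength;            // e.g. totalSize / numSplits
          long minSize = 1;                      // assumed 1-byte minimum split size

          // Healthy case: blocksize persisted correctly -> 10 MB split.
          System.out.println(computeSplitSize(goalSize, minSize, 64L * 1024 * 1024));

          // Upgraded single-block file with blocksize stored as zero:
          // min(goalSize, 0) == 0, so the split collapses to the 1-byte minimum.
          System.out.println(computeSplitSize(goalSize, minSize, 0));
        }
      }
      ```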

        Attachments

        1. blockSizeZero.patch
          5 kB
          dhruba borthakur


              People

              • Assignee:
                dhruba borthakur
              • Reporter:
                dhruba borthakur
              • Votes:
                0
              • Watchers:
                1
