Hadoop HDFS / HDFS-6758

Block writer should pass the expected block size to DataXceiverServer


    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.4.1
    • Fix Version/s: 2.6.0
    • Component/s: datanode, hdfs-client
    • Labels: None
    • Target Version/s:
    • Hadoop Flags: Reviewed

      Description

      DataXceiver initializes the block size to the cluster's default block size. FsDatasetImpl later uses this size when applying the VolumeChoosingPolicy.

          block.setNumBytes(dataXceiverServer.estimateBlockSize);
      

      where

        /**
         * We need an estimate for block size to check if the disk partition has
         * enough space. For now we set it to be the default block size set
         * in the server side configuration, which is not ideal because the
         * default block size should be a client-side configuration.
         * A better solution is to include in the header the estimated block size,
         * i.e. either the actual block size or the default block size.
         */
        final long estimateBlockSize;
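
      For context, the estimate matters because the available-space check compares it against each volume's free space when placing the replica. Below is a simplified sketch of that check, with illustrative names rather than the actual FsDatasetImpl or VolumeChoosingPolicy code:

          import java.util.List;

          /**
           * Simplified sketch only: the volume choosing policy skips volumes
           * whose free space cannot hold a block of the estimated size.
           */
          class VolumeChoosingSketch {
            interface Volume {
              long getAvailable();  // free bytes on this volume's disk partition
            }

            /** Return the first volume with room for blockSize bytes, or null if none fits. */
            static Volume chooseVolume(List<Volume> volumes, long blockSize) {
              for (Volume v : volumes) {
                if (v.getAvailable() >= blockSize) {
                  return v;  // this partition can hold a block of the estimated size
                }
              }
              return null;  // no volume fits the estimate
            }
          }

      With the cluster default as the estimate, an over-sized guess can needlessly reject volumes that would in fact fit the writer's smaller block, which is why using the writer's expected size is preferable.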
      

      In most cases the writer already knows its maximum expected block size, so it can simply pass that to the DataNode instead of leaving the DataNode to fall back on the cluster default.
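
      A minimal sketch of that idea, assuming a hypothetical requestedBlockSize value carried in the write request header (the names below are illustrative, not taken from the actual patch):

          /**
           * Illustrative only: prefer the block size supplied by the writer
           * over the server-side default when estimating space for the block.
           */
          class BlockSizeSketch {
            /** Sentinel meaning "the writer did not send an expected size". */
            static final long NOT_PROVIDED = 0;

            static long effectiveBlockSize(long requestedBlockSize,
                                           long serverDefaultBlockSize) {
              // Use the writer's expected maximum block size when present;
              // otherwise fall back to the cluster's default block size.
              return requestedBlockSize > NOT_PROVIDED
                  ? requestedBlockSize
                  : serverDefaultBlockSize;
            }
          }

      The DataNode would then call block.setNumBytes(...) with this effective size before applying the VolumeChoosingPolicy, so the space check reflects what the writer actually intends to write.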

        Attachments

        1. HDFS-6758.01.patch (15 kB, Arpit Agarwal)
        2. HDFS-6758.02.patch (7 kB, Arpit Agarwal)

            People

            • Assignee: Arpit Agarwal
            • Reporter: Arpit Agarwal

              Dates

              • Created:
              • Updated:
              • Resolved:
