Hadoop HDFS / HDFS-6758

Block writer should pass the expected block size to DataXceiverServer

Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.4.1
    • Fix Version/s: 2.6.0
    • Component/s: datanode, hdfs-client
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      DataXceiver initializes the block size to the cluster's default block size. This size is later used by FsDatasetImpl when applying the VolumeChoosingPolicy.

          block.setNumBytes(dataXceiverServer.estimateBlockSize);
      

      where

        /**
         * We need an estimate for block size to check if the disk partition has
         * enough space. For now we set it to be the default block size set
         * in the server side configuration, which is not ideal because the
         * default block size should be a client-side configuration.
         * A better solution is to include in the header the estimated block size,
         * i.e. either the actual block size or the default block size.
         */
        final long estimateBlockSize;
      

      In most cases the writer can simply pass the maximum expected block size to the DataNode instead of having it fall back to the cluster default.
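
      A minimal sketch of why the estimate matters (the helper name is illustrative, not the actual FsDatasetImpl/VolumeChoosingPolicy API): the DataNode uses the estimated block size to decide whether a volume has room for the incoming replica, so an inflated estimate, such as the cluster default when the client writes smaller blocks, can rule out volumes that have plenty of space for the actual block.

          // Illustrative sketch, not the actual VolumeChoosingPolicy code: a volume is
          // only eligible for a new replica if it can hold a block of the estimated size.
          static boolean volumeHasRoomFor(long volumeAvailableBytes, long estimatedBlockSize) {
            return volumeAvailableBytes >= estimatedBlockSize;
          }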

      Attachments

        1. HDFS-6758.01.patch
          15 kB
          Arpit Agarwal
        2. HDFS-6758.02.patch
          7 kB
          Arpit Agarwal


          Activity

            cmccabe Colin McCabe added a comment -

            I apologize if this is a dumb question, but how would the writer pass the expected block size to the DN? Via DataTransferProtocol?

            arp Arpit Agarwal added a comment -

            Correct.

            The attached patch adds a new optional parameter, expectedBlockLength, to DataTransferProtocol.writeBlock. If the writer does not pass it, the DataNode falls back to the default block size from the server-side configuration.
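
            As a sketch of the fallback semantics described above (only the parameter name expectedBlockLength comes from this comment; the helper itself is illustrative, with zero standing for "not provided"):

                // Illustrative helper, not the committed code: expectedBlockLength == 0 means
                // the writer supplied no value, so the server-side default is used instead.
                static long effectiveBlockLength(long expectedBlockLength, long serverDefaultBlockSize) {
                  return expectedBlockLength > 0 ? expectedBlockLength : serverDefaultBlockSize;
                }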

            hadoopqa Hadoop QA added a comment -

            -1 overall. Here are the results of testing the latest attachment
            http://issues.apache.org/jira/secure/attachment/12658359/HDFS-6758.01.patch
            against trunk revision .

            +1 @author. The patch does not contain any @author tags.

            +1 tests included. The patch appears to include 3 new or modified test files.

            +1 javac. The applied patch does not increase the total number of javac compiler warnings.

            +1 javadoc. There were no new javadoc warning messages.

            +1 eclipse:eclipse. The patch built with eclipse:eclipse.

            +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

            +1 release audit. The applied patch does not increase the total number of release audit warnings.

            -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

            org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
            org.apache.hadoop.TestGenericRefresh
            org.apache.hadoop.TestRefreshCallQueue
            org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery

            +1 contrib tests. The patch passed contrib unit tests.

            Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7484//testReport/
            Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7484//console

            This message is automatically generated.

            arp Arpit Agarwal added a comment - edited

            All test failures are unrelated.

            1. TestPipelinesFailover - HDFS-6694
            2. TestBlockRecovery, TestGenericRefresh, TestRefreshCallQueue - Filed HDFS-6768
            szetszwo Tsz-wo Sze added a comment -

            I think we do not need to change OpWriteBlockProto since the header (BaseHeaderProto) already has the numBytes.
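
            A minimal sketch of how the header-based approach could fit together, assuming only the existing ExtendedBlock accessors (setNumBytes/getNumBytes); the class and method names are illustrative, not the committed diff:

                import org.apache.hadoop.hdfs.protocol.ExtendedBlock;

                /** Illustrative sketch of carrying the block-size hint in the existing header field. */
                class BlockSizeHint {
                  // Writer side: record the expected (client-configured) block size on the
                  // block before issuing writeBlock, so no new protobuf field is needed.
                  static void set(ExtendedBlock block, long clientBlockSize) {
                    block.setNumBytes(clientBlockSize);
                  }

                  // DataNode side: trust the hint when present, otherwise fall back to the
                  // server-side default used today.
                  static void apply(ExtendedBlock block, long serverDefaultBlockSize) {
                    long hinted = block.getNumBytes();
                    block.setNumBytes(hinted > 0 ? hinted : serverDefaultBlockSize);
                  }
                }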

            arp Arpit Agarwal added a comment -

            Good idea Nicholas. It simplifies the patch quite a bit. Updated patch.

            szetszwo Tsz-wo Sze added a comment -

            +1 patch looks good.

            hadoopqa Hadoop QA added a comment -

            -1 overall. Here are the results of testing the latest attachment
            http://issues.apache.org/jira/secure/attachment/12663155/HDFS-6758.02.patch
            against trunk revision .

            +1 @author. The patch does not contain any @author tags.

            +1 tests included. The patch appears to include 1 new or modified test files.

            +1 javac. The applied patch does not increase the total number of javac compiler warnings.

            +1 javadoc. There were no new javadoc warning messages.

            +1 eclipse:eclipse. The patch built with eclipse:eclipse.

            +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

            -1 release audit. The applied patch generated 3 release audit warnings.

            -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

            org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS
            org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
            org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
            org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport

            +1 contrib tests. The patch passed contrib unit tests.

            Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7698//testReport/
            Release audit warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/7698//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
            Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7698//console

            This message is automatically generated.

            arp Arpit Agarwal added a comment -

            Thanks Nicholas.

            The test failures look unrelated; all tests pass for me locally. Will commit it shortly.

            hudson Hudson added a comment -

            FAILURE: Integrated in Hadoop-trunk-Commit #6091 (See https://builds.apache.org/job/Hadoop-trunk-Commit/6091/)
            HDFS-6758. Block writer should pass the expected block size to DataXceiverServer (Arpit Agarwal) (arp: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1619275)

            • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
            • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
            • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
            • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
            • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestWriteBlockGetsBlockLengthHint.java
            hudson Hudson added a comment -

            FAILURE: Integrated in Hadoop-Yarn-trunk #653 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/653/)
            HDFS-6758. Block writer should pass the expected block size to DataXceiverServer (Arpit Agarwal) (arp: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1619275)

            • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
            • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
            • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
            • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
            • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestWriteBlockGetsBlockLengthHint.java
            hudson Hudson added a comment -

            FAILURE: Integrated in Hadoop-Hdfs-trunk #1844 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1844/)
            HDFS-6758. Block writer should pass the expected block size to DataXceiverServer (Arpit Agarwal) (arp: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1619275)

            • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
            • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
            • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
            • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
            • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestWriteBlockGetsBlockLengthHint.java
            hudson Hudson added a comment -

            SUCCESS: Integrated in Hadoop-Mapreduce-trunk #1870 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1870/)
            HDFS-6758. Block writer should pass the expected block size to DataXceiverServer (Arpit Agarwal) (arp: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1619275)

            • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
            • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
            • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
            • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
            • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestWriteBlockGetsBlockLengthHint.java

            People

              Assignee: Arpit Agarwal
              Reporter: Arpit Agarwal
              Votes: 0
              Watchers: 5

              Dates

                Created:
                Updated:
                Resolved: