Hadoop HDFS / HDFS-7380

Unsteady and slow performance when writing to a file with block size > 2GB

Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 2.4.0
    • Fix Version/s: None
    • Component/s: None
    • Labels: None

Description

    Appending to a large file with a block size > 2GB can lead to periods of very poor performance (4X slower than optimal). I found this issue while investigating Accumulo write performance in ACCUMULO-3303. I wrote a small test application (attached as BenchmarkWrites.java) that isolates the problem to a few basic API calls. A description of the execution can be found here: https://issues.apache.org/jira/browse/ACCUMULO-3303?focusedCommentId=14202830&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14202830

    The specific Hadoop version was as follows:

        [root@n1 ~]# hadoop version
        Hadoop 2.4.0.2.1.2.0-402
        Subversion git@github.com:hortonworks/hadoop.git -r 9e5db004df1a751e93aa89b42956c5325f3a4482
        Compiled by jenkins on 2014-04-27T22:28Z
        Compiled with protoc 2.5.0
        From source with checksum 9e788148daa5dd7934eb468e57e037b5
        This command was run using /usr/lib/hadoop/hadoop-common-2.4.0.2.1.2.0-402.jar
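
    The attached BenchmarkWrites.java is not reproduced here, but a minimal sketch of the kind of write loop that exercises this code path might look like the following. The file path, block size, total bytes written, and reporting interval below are illustrative assumptions, not values taken from the attachment.

        // Sketch only: measures steady-state write throughput to an HDFS file
        // whose block size exceeds 2GB. All sizes and paths are assumptions.
        import java.io.IOException;

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FSDataOutputStream;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class BigBlockWriteSketch {
          public static void main(String[] args) throws IOException {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            long blockSize  = 4L * 1024 * 1024 * 1024;  // 4 GB block, i.e. > 2GB
            long totalBytes = 8L * 1024 * 1024 * 1024;  // write 8 GB in total
            byte[] buf = new byte[64 * 1024];           // 64 KB per write call

            FSDataOutputStream out = fs.create(
                new Path("/tmp/bigblock-test"), true, buf.length, (short) 3, blockSize);

            long written = 0, intervalBytes = 0, last = System.currentTimeMillis();
            while (written < totalBytes) {
              out.write(buf);
              written += buf.length;
              intervalBytes += buf.length;
              long now = System.currentTimeMillis();
              if (now - last >= 1000) {  // report roughly once per second
                System.out.printf("%.1f MB/s%n",
                    (intervalBytes / 1048576.0) * 1000.0 / (now - last));
                intervalBytes = 0;
                last = now;
              }
            }
            out.close();
            fs.close();
          }
        }

    With a steady loop like this the per-second rate should stay roughly flat; the symptom reported above shows up as recurring intervals where the rate drops to a fraction of the peak.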
      

Attachments

    1. BenchmarkWrites.java (3 kB, Adam Fuchs)

People

    Assignee: Unassigned
    Reporter: Adam Fuchs (afuchs)
    Votes: 0
    Watchers: 7

Dates

    Created:
    Updated: