Hadoop Common / HADOOP-18216

Document "io.file.buffer.size" must be greater than zero


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 3.4.0
    • Fix Version/s: 3.4.0
    • Component/s: io
    • Hadoop Flags: Reviewed
    • Flags: Patch

    Description

      When the "io.file.buffer.size" property in the configuration file is set to a value less than or equal to zero, HDFS can still start normally, but reading and writing data will fail.

      When the value is less than zero, the shell will throw the following exception:

      hadoop@ljq1:~/hadoop-3.1.3-work/bin$ ./hdfs dfs -cat mapred
      -cat: Fatal internal error
      java.lang.NegativeArraySizeException: -4096
              at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:93)
              at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:68)
              at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:129)
              at org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:101)
              at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:96)
              at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
              at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:303)
              at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:285)
              at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:269)
              at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:120)
              at org.apache.hadoop.fs.shell.Command.run(Command.java:176)
              at org.apache.hadoop.fs.FsShell.run(FsShell.java:328)
              at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
              at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
              at org.apache.hadoop.fs.FsShell.main(FsShell.java:391)
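
      The crash is easy to reproduce outside of Hadoop: the copy loop in IOUtils.copyBytes allocates its buffer directly from the configured size, so a negative value reaches the byte[] allocation unchecked. A minimal Java sketch of that loop (a simplified stand-in, not the actual Hadoop source; class and method names are illustrative):

      import java.io.ByteArrayInputStream;
      import java.io.ByteArrayOutputStream;
      import java.io.IOException;
      import java.io.InputStream;
      import java.io.OutputStream;

      public class NegativeBufferDemo {
          // Simplified stand-in for the copy loop in IOUtils.copyBytes: the
          // buffer is allocated straight from the configured size, unvalidated.
          static void copyBytes(InputStream in, OutputStream out, int buffSize)
                  throws IOException {
              byte[] buf = new byte[buffSize]; // buffSize = -4096 -> NegativeArraySizeException
              int bytesRead = in.read(buf);
              while (bytesRead >= 0) {
                  out.write(buf, 0, bytesRead);
                  bytesRead = in.read(buf);
              }
          }

          public static void main(String[] args) throws IOException {
              InputStream in = new ByteArrayInputStream("hello".getBytes());
              OutputStream out = new ByteArrayOutputStream();
              copyBytes(in, out, -4096); // mirrors io.file.buffer.size = -4096
          }
      }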

      When the value is equal to zero, the shell command blocks forever:

      hadoop@ljq1:~/hadoop-3.1.3-work/bin$ ./hdfs dfs -cat mapred
      ^Z
      [2]+  Stopped                 ./hdfs dfs -cat mapred
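
      The hang at zero follows from the same copy loop: InputStream.read(byte[]) is specified to return 0, not -1, when the destination buffer has zero length, so the end-of-stream check never fires. A minimal sketch of the degenerate case (class name is illustrative):

      import java.io.ByteArrayInputStream;
      import java.io.IOException;
      import java.io.InputStream;

      public class ZeroBufferDemo {
          public static void main(String[] args) throws IOException {
              InputStream in = new ByteArrayInputStream("hello".getBytes());
              byte[] buf = new byte[0];      // mirrors io.file.buffer.size = 0
              int bytesRead = in.read(buf);  // returns 0, never -1, for a zero-length buffer
              while (bytesRead >= 0) {       // end of stream is never observed
                  bytesRead = in.read(buf);  // spins forever: the command appears to hang
              }
          }
      }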

      The property's description in the configuration file is not clear enough; it may lead people to think that setting it to 0 enables a non-blocking mode.

      <property>   
          <name>io.file.buffer.size</name>   
          <value>4096</value>   
          <description>The size of buffer for use in sequence files.   
          The size of this buffer should probably be a multiple of hardware   
          page size (4096 on Intel x86), and it determines how much data is   
          buffered during read and write operations.</description> 
      </property>
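
      One way to document the constraint would be to state it explicitly in the description (a suggested wording, not necessarily the committed text):

      <property>
          <name>io.file.buffer.size</name>
          <value>4096</value>
          <description>The size of buffer for use in sequence files.
          The size of this buffer should probably be a multiple of hardware
          page size (4096 on Intel x86), and it determines how much data is
          buffered during read and write operations. This value must be
          greater than zero.</description>
      </property>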

       
      Considering that this value is used frequently by HDFS and MapReduce, we should require it to be a number greater than zero.
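
      One possible shape for such a guard, placed where the key is read (a hypothetical sketch, not the committed patch; the two constants referenced do exist in CommonConfigurationKeysPublic):

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.CommonConfigurationKeysPublic;

      public class BufferSizeGuard {
          // Hypothetical validation: reject non-positive io.file.buffer.size
          // values up front instead of letting them reach a buffer allocation.
          static int validatedBufferSize(Configuration conf) {
              int size = conf.getInt(
                  CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_KEY,      // "io.file.buffer.size"
                  CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_DEFAULT); // 4096
              if (size <= 0) {
                  throw new IllegalArgumentException(
                      CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_KEY
                      + " must be greater than zero, but was " + size);
              }
              return size;
          }
      }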

       


People

    Assignee: fujx ECFuzz
    Reporter: fujx ECFuzz
    Votes: 0
    Watchers: 3


Time Tracking

    Estimated: Not Specified
    Remaining: 0h
    Logged: 1h 50m