Hadoop HDFS / HDFS-2021

TestWriteRead failed with inconsistent visible length of a file

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.23.0
    • Component/s: datanode
    • Labels: None
    • Environment: Linux RHEL5
    • Hadoop Flags: Reviewed

      Description

      The JUnit test fails when it iterates a number of times with a larger chunk size on Linux. Once in a while, the number of bytes visible to a reader is slightly less than it is supposed to be.

      When run with the following parameters, it failed more often on Linux (as reported by John George) than on my Mac:
      private static final int WR_NTIMES = 300;
      private static final int WR_CHUNK_SIZE = 10000;

      After adding more debugging output to the source, this is a sample of the output:
      Caused by: java.io.IOException: readData mismatch in byte read: expected=2770000 ; got 2765312
      at org.apache.hadoop.hdfs.TestWriteRead.readData(TestWriteRead.java:141)
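
      The following is a minimal sketch, not the actual TestWriteRead source, of the write/hflush/read pattern the test exercises; the file path, buffer size, and class name are illustrative. Each chunk is flushed with hflush(), after which a fresh reader should see at least as many bytes as have been written so far; the reported failure is the case where it sees fewer.

      import java.io.IOException;

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FSDataInputStream;
      import org.apache.hadoop.fs.FSDataOutputStream;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;

      public class VisibleLengthCheck {
        // Parameters taken from the description above.
        private static final int WR_NTIMES = 300;
        private static final int WR_CHUNK_SIZE = 10000;

        public static void main(String[] args) throws IOException {
          Configuration conf = new Configuration();
          FileSystem fs = FileSystem.get(conf);
          Path file = new Path("/tmp/visibleLengthCheck.dat"); // illustrative path

          byte[] chunk = new byte[WR_CHUNK_SIZE];
          FSDataOutputStream out = fs.create(file, true);
          long written = 0;
          for (int i = 0; i < WR_NTIMES; i++) {
            out.write(chunk);
            out.hflush();            // make the written bytes visible to new readers
            written += chunk.length;

            // A fresh reader should see at least 'written' bytes after hflush().
            long seen = 0;
            FSDataInputStream in = fs.open(file);
            byte[] buf = new byte[64 * 1024];
            int n;
            while ((n = in.read(buf)) > 0) {
              seen += n;
            }
            in.close();
            if (seen < written) {
              throw new IOException("readData mismatch in byte read: expected="
                  + written + " ; got " + seen);
            }
          }
          out.close();
        }
      }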

      Attachments

      1. HDFS-2021-2.patch (2 kB, John George)
      2. HDFS-2021.patch (1 kB, John George)


          People

          • Assignee: John George
          • Reporter: CW Chung
          • Votes: 0
          • Watchers: 1
