Hadoop HDFS / HDFS-3701

HDFS may miss the final block when reading a file opened for writing if one of the datanodes is dead

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 1.0.3
    • Fix Version/s: 1.1.0
    • Component/s: hdfs-client
    • Labels: None
    • Hadoop Flags:
      Reviewed

      Description

      When the file is opened for writing, the DFSClient calls one of the datanodes owning the last block to get its size. If this datanode is dead, the socket exception is swallowed and the size of this last block is reported as zero. This seems to be fixed on trunk, but I didn't find a related JIRA; on 1.0.3 it is not fixed. It is in the same area as HDFS-1950 and HDFS-3222.
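      The robust behaviour is to try each replica of the last block in turn and surface an error only when every datanode fails, instead of swallowing the first socket exception and reporting a zero length. A minimal sketch of that retry loop, with hypothetical names (this is not the actual DFSClient code):

      ```java
      import java.io.IOException;
      import java.io.UncheckedIOException;
      import java.util.List;

      // Illustrative sketch of the replica-retry pattern; names are hypothetical,
      // not the real DFSClient API.
      class LastBlockLengthFetcher {
          /** One datanode holding a replica of the under-construction last block. */
          interface Replica {
              long getVisibleLength() throws IOException; // RPC to that datanode
          }

          /**
           * Ask each replica in turn for the block's visible length. Unlike the
           * buggy 1.0.3 behaviour (swallow the exception, report zero), a dead
           * datanode just means we move on to the next replica; only when all
           * replicas fail is an error surfaced to the caller.
           */
          static long fetchLength(List<Replica> replicas) {
              IOException last = new IOException("no replicas to ask");
              for (Replica r : replicas) {
                  try {
                      return r.getVisibleLength();
                  } catch (IOException e) {
                      last = e; // datanode unreachable: try the next replica
                  }
              }
              throw new UncheckedIOException(
                  "no datanode could report the last block's length", last);
          }
      }
      ```

      With this shape, a single dead datanode is harmless as long as any other replica answers, and a total failure is an explicit error rather than a silently truncated file.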

        Attachments

        1. HDFS-3701.patch
          11 kB
          Uma Maheswara Rao G
        2. HDFS-3701.ontopof.v1.patch
          2 kB
          Nicolas Liochon
        3. HDFS-3701.branch-1.v2.merged.patch
          11 kB
          Nicolas Liochon
        4. HDFS-3701.branch-1.v3.patch
          11 kB
          Uma Maheswara Rao G
        5. HDFS-3701.branch-1.v4.patch
          11 kB
          Uma Maheswara Rao G

          People

          • Assignee:
            nkeywal Nicolas Liochon
          • Reporter:
            nkeywal Nicolas Liochon
          • Votes:
            1
          • Watchers:
            16

          Dates

          • Created:
          • Updated:
          • Resolved: