Hadoop HDFS / HDFS-16127

Improper pipeline close recovery causes a permanent write failure or data loss.


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.4.0, 3.3.2
    • Fix Version/s: 3.4.0, 3.2.3, 3.3.2
    • Component/s: hdfs
    • Labels: None

    Description

      When a block is being closed, the client's data streamer waits for the final ACK from the pipeline. If an exception is received during this wait, the close is retried. The retry assumes the streamer's pipeline state is still intact; HDFS-15813 invalidated that assumption, resulting in permanent write failures in some close-error cases involving slow nodes, and, less frequently, in data loss.
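      The failure mode above can be illustrated with a minimal sketch. All names here are hypothetical (this is not the actual DataStreamer code): it models a close that waits for the final ACK and retries on failure, showing that retrying alone cannot succeed if nothing restores the broken pipeline state between attempts.

```java
import java.io.IOException;

public class PipelineCloseSketch {
    /** Simulated streamer whose pipeline may have broken mid-close. */
    static class Streamer {
        final boolean pipelineIntact;
        Streamer(boolean intact) { this.pipelineIntact = intact; }

        /** Send the empty "last packet" and wait for the final ACK. */
        void closeAndWaitForAck() throws IOException {
            if (!pipelineIntact) {
                // With a broken pipeline, the final ACK never arrives.
                throw new IOException("final ack not received");
            }
        }
    }

    /** Retry the close a bounded number of times on failure. */
    static boolean closeWithRetry(Streamer s, int maxRetries) {
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                s.closeAndWaitForAck();
                return true;  // block committed
            } catch (IOException e) {
                // Retrying only helps if pipeline recovery restored the
                // streamer's state in between; otherwise every attempt
                // fails the same way.
            }
        }
        return false;  // permanent write failure
    }

    public static void main(String[] args) {
        // Healthy pipeline: the close succeeds on the first attempt.
        if (!closeWithRetry(new Streamer(true), 3)) {
            throw new RuntimeException("healthy close should succeed");
        }
        // Broken pipeline with no recovery: retries are futile.
        if (closeWithRetry(new Streamer(false), 3)) {
            throw new RuntimeException("broken close should fail");
        }
    }
}
```

      The sketch is only meant to show why the fix must repair the streamer's state (or re-run pipeline recovery) before retrying, rather than simply retrying the close.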

      Attachments

        1. HDFS-16127.patch
          1 kB
          Kihwal Lee

            People

              Assignee: Kihwal Lee
              Reporter: Kihwal Lee
              Votes: 0
              Watchers: 9

              Dates

                Created:
                Updated:
                Resolved: