Hadoop Common · HADOOP-3678

Avoid spurious "DataXceiver: java.io.IOException: Connection reset by peer" errors in DataNode log


Details

    • Type: Bug
    • Status: Closed
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 0.17.0
    • Fix Version/s: 0.17.2
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed
    • Release Note: Avoid spurious exceptions logged at DataNode when clients read from DFS.

Description

When a client reads data with read(), it closes the socket once it is done, often before reaching the end of a block. The DataNode on the other side keeps writing data until the client connection is closed or the end of the block is reached. If the client stops reading before the end of the block, the DataNode logs an error message and stack trace. It should not: this is not an error, and it only pollutes the log and confuses the user.
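The fix described here amounts to classifying the IOException the DataNode sees when a client disconnects early, and demoting it from an ERROR-with-stack-trace to a quiet one-liner. A minimal sketch of that idea (hypothetical class and method names, not the actual patch):

```java
import java.io.IOException;

/**
 * Hypothetical sketch of the logging fix: distinguish an expected
 * early close by the reading client from a genuine transfer failure,
 * so the DataNode can skip the ERROR-level stack trace for the former.
 */
public class BlockSenderSketch {

    /** Messages the OS produces when the peer simply stopped reading. */
    static boolean isClientEarlyClose(IOException e) {
        String msg = e.getMessage();
        return msg != null
            && (msg.contains("Connection reset by peer")
                || msg.contains("Broken pipe"));
    }

    /** Demote expected disconnects to a short INFO line; keep real errors loud. */
    static String logLineFor(IOException e) {
        if (isClientEarlyClose(e)) {
            return "INFO: client closed connection before end of block: "
                + e.getMessage();
        }
        return "ERROR: failed to send block: " + e;
    }

    public static void main(String[] args) {
        // An early client close is reported as a single INFO line...
        System.out.println(logLineFor(new IOException("Connection reset by peer")));
        // ...while an unrelated I/O failure still surfaces as an ERROR.
        System.out.println(logLineFor(new IOException("No space left on device")));
    }
}
```

The design point is that the DataNode cannot tell in advance how much of the block the client wants, so an early close is normal protocol behavior, not a fault worth a stack trace.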

Attachments

    1. HADOOP-3678-branch-17.patch (1 kB, Raghu Angadi)
    2. HADOOP-3678.patch (3 kB, Raghu Angadi)
    3. HADOOP-3678.patch (3 kB, Raghu Angadi)

People

    Assignee: Raghu Angadi (rangadi)
    Reporter: Raghu Angadi (rangadi)
    Votes: 0
    Watchers: 6

Dates

    Created:
    Updated:
    Resolved: