Hadoop HDFS · HDFS-4178

shell scripts should not close stderr


    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.0.2-alpha
    • Fix Version/s: 2.0.3-alpha
    • Component/s: scripts
    • Labels:
      None
    • Hadoop Flags:
      Reviewed

      Description

      The start-dfs.sh and stop-dfs.sh scripts close stderr for some subprocesses using the construct

      2>&-

      This is dangerous because child processes started this way will re-use file descriptor 2 for files they open. Since libc and many other code paths assume that file descriptor 2 can always be written to in error conditions, an error message can end up being written into an unrelated open file, potentially corrupting its data.

      It is much better to redirect stderr with the construct 2>/dev/null.
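      The failure mode can be sketched with a small shell experiment (hypothetical file names, not part of the Hadoop patch): with stderr closed via 2>&-, the first open(2) a child process performs returns the lowest free descriptor, which is now 2, so library error output aimed at "stderr" would land in that data file. A Python one-liner is used here only as a convenient child process that calls open(2) directly.

      #!/bin/sh
      # Sketch: run a child with stderr closed and observe which descriptor
      # its first opened file receives.
      tmp=$(mktemp)

      fd=$(python3 -c '
      import os, sys
      # open(2) returns the lowest free descriptor; with fd 2 closed
      # this data file is assigned descriptor 2, so any later write to
      # "stderr" would corrupt it.
      print(os.open(sys.argv[1], os.O_WRONLY | os.O_CREAT, 0o600))
      ' "$tmp" 2>&-)

      echo "data file opened on fd $fd"
      rm -f "$tmp"

      Running the same child with 2>/dev/null instead keeps descriptor 2 occupied, so the opened file receives descriptor 3 and stray error output goes harmlessly to /dev/null.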

        Attachments

        1. hdfs4178.txt
          1 kB
          Andy Isaacson


            People

            • Assignee:
              Andy Isaacson (adi2)
            • Reporter:
              Andy Isaacson (adi2)

