Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.8.0
    • Fix Version/s: 0.9.0
    • Component/s: None
    • Labels:
      None

      Description

After upgrading to 0.8.0, some of my script applications stopped working properly, apparently because the hadoop dfs utility returns a 0 exit code when it should not (a revival of HADOOP-488, with a different cause).

dfs -cat and dfs -rm always return exit code 0, even for non-existing files. The former traces back to DFSShell's 'run' method calling 'doall' without passing on its exit code ('doall' catches its own exceptions and returns an exit code). The latter occurs because DFSShell uses the return value of the DFSClient delete method only to print different messages, without affecting the exit code.

There may be more inconsistent behavior in the dfs shell. The hadoop dfs command line should return 0 (signaling success) exactly when the corresponding unix command would return 0, or at least its exit codes should relate to success in some documented manner.
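The convention the report asks for can be illustrated with the plain unix commands (paths here are hypothetical examples, not from the original report):

```shell
# Unix convention that 'hadoop dfs -cat' and 'hadoop dfs -rm' should mirror:
# operating on a non-existing file yields a non-zero exit code.
cat /no/such/file 2>/dev/null
echo "cat exit code: $?"   # non-zero for a missing file
rm /no/such/file 2>/dev/null
echo "rm exit code: $?"    # non-zero for a missing file
```

Per the report, the 0.8.0 dfs shell instead exits 0 in both cases.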

I would also recommend adding a regression test to prevent this from breaking again.
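A minimal sketch of such a regression check, as a shell helper that fails whenever a command unexpectedly exits 0 (the `hadoop dfs` invocations are illustrative assumptions, shown in comments; the helper is demonstrated against plain unix `cat`):

```shell
# Fail loudly if the given command exits 0; succeed if it exits non-zero.
check_nonzero() {
  if "$@" 2>/dev/null; then
    echo "FAIL: '$*' unexpectedly returned exit code 0" >&2
    return 1
  fi
  echo "OK: '$*' returned a non-zero exit code"
}

# In a dfs regression suite, the same helper would wrap e.g.:
#   check_nonzero hadoop dfs -cat /no/such/file
#   check_nonzero hadoop dfs -rm  /no/such/file
# Demonstrated here against plain unix cat:
check_nonzero cat /no/such/file
```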

            People

• Assignee:
  dhruba borthakur
  Reporter:
  Christian Kunz
• Votes:
  0
  Watchers:
  0
