Hadoop HDFS / HDFS-328

"fs -setrep" should have better error message

Details

    • Type: Improvement
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: namenode
    • Labels:

Description

When the requested replication factor is larger than dfs.replication.max (defined in the configuration), "fs -setrep" prints a meaningless error message that names the file but not the reason. For example:

    // dfs.replication.max is 512

    $ hadoop fs -setrep 1000 r.txt
    setrep: java.io.IOException: file /user/tsz/r.txt.
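
A likely cause of the truncation is that the old FSNamesystem.verifyReplication() builds its message with a newline right after "file <src>.", while the shell prints only the first line of the exception message, so the actual reason is dropped. Below is a minimal sketch of a single-line message; the class name and the min/max stand-in values are hypothetical, and this is not the committed fix.

    import java.io.IOException;

    // Hedged sketch, not the committed fix: names and values are stand-ins.
    class ReplicationCheckSketch {
      // Stand-ins for the configured dfs.replication.min / dfs.replication.max.
      private final short minReplication = 1;
      private final short maxReplication = 512;

      void verifyReplication(String src, short replication, String clientName)
          throws IOException {
        // Keep the whole message on one line so a shell that prints only the
        // first line of the exception message still shows the reason.
        String text = "file " + src
            + ((clientName != null) ? " on client " + clientName : "")
            + ". Requested replication " + replication;

        if (replication > maxReplication) {
          throw new IOException(text + " exceeds maximum " + maxReplication);
        }
        if (replication < minReplication) {
          throw new IOException(text + " is less than the required minimum "
              + minReplication);
        }
      }
    }

With a single-line message, the shell output would read something like "setrep: java.io.IOException: file /user/tsz/r.txt. Requested replication 1000 exceeds maximum 512".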

Activity

Nigel Daley added a comment -

The fix for this should include some new test cases added to src/test/org/apache/hadoop/cli/testConf.xml (which is run by TestCLI).

Allen Wittenauer added a comment -

Just verified that in 3.x trunk this error message is still broken:

    $ hdfs dfs -setrep 10000 /hosts
    setrep: file /hosts.

People

    • Assignee: Ravi Phulari
    • Reporter: Tsz Wo Nicholas Sze
    • Votes: 1
    • Watchers: 2
