Hadoop HDFS / HDFS-12833

Distcp : Update the usage of delete option for dependency with update and overwrite option


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 2.8.0
    • Fix Version/s: 3.1.0, 2.10.0, 2.9.1, 2.8.4
    • Component/s: distcp, hdfs
    • Labels: None

    Description

      The -delete option is applicable only in combination with the -update or -overwrite options, but the usage message does not say so. When I ran distcp as described in the usage message, I got the exception below.

      bin:> ./hadoop distcp -delete /Dir1/distcpdir /Dir/distcpdir5
      2017-11-17 20:48:09,828 ERROR tools.DistCp: Invalid arguments:
      java.lang.IllegalArgumentException: Delete missing is applicable only with update or overwrite options
              at org.apache.hadoop.tools.DistCpOptions$Builder.validate(DistCpOptions.java:528)
              at org.apache.hadoop.tools.DistCpOptions$Builder.build(DistCpOptions.java:487)
              at org.apache.hadoop.tools.OptionsParser.parse(OptionsParser.java:233)
              at org.apache.hadoop.tools.DistCp.run(DistCp.java:141)
              at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
              at org.apache.hadoop.tools.DistCp.main(DistCp.java:432)
      Invalid arguments: Delete missing is applicable only with update or overwrite options
      usage: distcp OPTIONS [source_path...] <target_path>
                    OPTIONS
       -append                       Reuse existing data in target files and
                                     append new data to them if possible
       -async                        Should distcp execution be blocking
       -atomic                       Commit all changes or none
       -bandwidth <arg>              Specify bandwidth per map in MB, accepts
                                     bandwidth as a fraction.
       -blocksperchunk <arg>         If set to a positive value, files with more
                                     blocks than this value will be split into
                                     chunks of <blocksperchunk> blocks to be
                                     transferred in parallel, and reassembled on
                                     the destination. By default,
                                     <blocksperchunk> is 0 and the files will be
                                     transmitted in their entirety without
                                     splitting. This switch is only applicable
                                     when the source file system implements
                                     getBlockLocations method and the target
                                     file system implements concat method
       -copybuffersize <arg>         Size of the copy buffer to use. By default
                                     <copybuffersize> is 8192B.
       -delete                       Delete from target, files missing in source
       -diff <arg>                   Use snapshot diff report to identify the
                                     difference between source and target
      

      The documentation also does not describe the correct usage.
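
      For reference, pairing -delete with -update (or -overwrite) avoids the validation error; the paths below are the example directories from this report:

      ```shell
      # -delete removes files from the target that are missing in the source;
      # it must be combined with -update or -overwrite, otherwise
      # OptionsParser.parse throws IllegalArgumentException.
      ./hadoop distcp -update -delete /Dir1/distcpdir /Dir/distcpdir5
      ```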

      Attachments

        1. HDFS-12833.001.patch
          3 kB
          usharani
        2. HDFS-12833.patch
          3 kB
          usharani
        3. HDFS-12833-branch-2.001.patch
          3 kB
          usharani
        4. HDFS-12833-branch-2.committed.patch
          3 kB
          Surendra Singh Lilhore

          People

            Assignee: usharani
            Reporter: Harshakiran Reddy
            Votes: 0
            Watchers: 4
