Details

    • Type: New Feature
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.18.0
    • Component/s: None
    • Labels:
      None

      Description

      A utility that will collapse the contents of a directory into a small number of files.

        Issue Links

          Activity

          Milind Bhandarkar added a comment -

          Thanks to Arkady, here are the requirements for such a tool.

          Here is the outline of the functionality.
          Please comment – does this meet your needs? What is missing? This will help get this small tool right.

          The purpose of the tool:
          given a DFS directory with N part files, produce a DFS directory with M part files with content equivalent to the original one.
          Optionally, the tool will also compress the data in a way that is transparent to MapReduce jobs.

          What is "Equivalent content"?
          There are three cases:

          • records are independent and the order does not matter
          • records are totally ordered (the keys are ordered in each part file, and all the keys in part-i are "less" than those in part-i+1)
          • records are ordered within each shard (part-file), and this order is important
            In the second and third cases, records with the same key should be in the same shard (part file). This may be required in case 1, too.
            The first two cases allow the shards simply to be concatenated into larger ones.
            In the third case, the shards need to be merged according to key order.
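          The case-3 merge described above is a standard k-way merge of pre-sorted inputs. A minimal sketch (illustrative only – `merge_shards` is a hypothetical helper, not part of any existing tool):

          ```python
          import heapq

          # Case 3: each input shard is already sorted by key, and that order
          # matters, so shards must be merged (not just concatenated) to
          # produce larger shards.
          def merge_shards(shards, key=lambda rec: rec[0]):
              """K-way merge of pre-sorted shards into one key-ordered stream."""
              return list(heapq.merge(*shards, key=key))

          shard_a = [(1, "x"), (4, "y")]   # sorted by key
          shard_b = [(2, "p"), (3, "q")]   # sorted by key
          merged = merge_shards([shard_a, shard_b])
          # merged -> [(1, 'x'), (2, 'p'), (3, 'q'), (4, 'y')]
          ```

          Because each input is already ordered, the merge runs in a single pass over the data, which is what makes the map-side implementation mentioned below plausible.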

          The command will look like:

          dfs_compact
            -input     input DFS directory path (required)
            -output    output DFS directory path (by default – replace the input)
            -nshards   the number of shards (part files) in the output
            -shardsize the approximate desired size of a shard in the output
                       (only one of -nshards and -shardsize should be specified;
                       default – one shard)
            -order [yes|no]  optional; default – "no";
                       "yes" corresponds to case 3 (this may require supplying
                       a key comparison method)
            -compress [gzip|zlib|lzo]  if the option is specified with no value,
                       the tool will pick the compression method itself

          It will probably be implemented as a MapReduce job (map-only for cases 1 and 2).
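          The -nshards/-shardsize arithmetic and the same-key-same-shard requirement could be sketched as follows; `plan_shards` and `shard_for_key` are hypothetical names for illustration, under the assumption that key routing is done by hashing:

          ```python
          import hashlib
          import math

          def plan_shards(total_bytes, nshards=None, shard_size=None):
              """Pick the output shard count M: only one of nshards and
              shard_size may be given; the default is a single shard."""
              if nshards is not None and shard_size is not None:
                  raise ValueError("specify only one of -nshards and -shardsize")
              if nshards is not None:
                  return nshards
              if shard_size is not None:
                  return max(1, math.ceil(total_bytes / shard_size))
              return 1  # default – one shard

          def shard_for_key(key, nshards):
              """Route all records with the same key to the same output shard,
              as cases 2 and 3 require (deterministic hash partitioning)."""
              digest = hashlib.md5(str(key).encode("utf-8")).hexdigest()
              return int(digest, 16) % nshards
          ```

          Using a stable hash (rather than Python's per-process `hash()`) keeps the partitioning deterministic across mapper processes, which is why md5 is used in the sketch.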

          Mahadev konar added a comment -

          editing this issue to be just a compaction utility.

          Robert Chansler added a comment -

          Mahadev's work (HADOOP-3307) is close enough. There is no reason for this to be open.


            People

            • Assignee:
              Robert Chansler
              Reporter:
              Robert Chansler
            • Votes:
              0
              Watchers:
              3

              Dates

              • Created:
                Updated:
                Resolved:

                Development