Thanks to Arkady for providing the requirements for such a tool. Here is an outline of the functionality.
Please comment: does this meet your needs, what is missing, etc. This will help get this small tool right.
The purpose of the tool:
given a DFS directory with N part files, produce a DFS directory with M part files with content equivalent to the original one.
Optionally, the tool will also compress the data in a way that is transparent to MapReduce jobs.
What is "Equivalent content"?
There are three cases:
- records are independent and the order does not matter
- records are totally ordered (the keys are ordered in each part file, and all the keys in part-i are "less" than those in part-i+1)
- records are ordered within each shard (part-file), and this order is important
In the second and third cases, records with the same key must be in the same shard (part-file). This may be required in case 1 as well.
In the first two cases, the shards can simply be concatenated into larger ones.
In the third case, the shards need to be merged according to the key order.
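The two strategies might be sketched as follows (a Python illustration only, not the tool's actual code; it assumes each record is a (key, value) pair, and uses heapq.merge for the k-way merge of case 3):

```python
import heapq
from itertools import chain

def concatenate_shards(shards):
    # Cases 1 and 2: the output is just the shards' records back to back.
    return list(chain.from_iterable(shards))

def merge_shards(shards):
    # Case 3: each shard is already sorted by key; heapq.merge performs a
    # k-way merge that preserves key order across shards.
    return list(heapq.merge(*shards, key=lambda record: record[0]))

a = [("a", 1), ("c", 3)]
b = [("b", 2), ("d", 4)]
print(merge_shards([a, b]))  # keys come out in order: a, b, c, d
```

For case 2 the concatenation must also respect the shard numbering (part-0, part-1, ...), since all keys in part-i are "less" than those in part-i+1.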
The command will look like:
-input input dfs directory path (required)
-output output dfs directory path (by default – replace the input)
-nshards the number of shards (part files) in the output
-shardsize the approximate desired size of a shard in the output
only one of -nshards and -shardsize should be specified
default – one shard
-order [yes|no] optional; default – "no"
"yes" corresponds to case three (this may require supplying a key comparison method)
-compress [gzip|zlib|lzo] if the option is specified with no value,
the tool will pick the compression method itself
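The interaction between -nshards and -shardsize could be resolved as in this sketch (hypothetical names, assuming -shardsize is given in bytes and the total input size is known from the DFS):

```python
import math

def resolve_nshards(total_bytes, nshards=None, shardsize=None):
    # Only one of -nshards / -shardsize may be given; the default is one shard.
    if nshards is not None and shardsize is not None:
        raise ValueError("specify only one of -nshards and -shardsize")
    if shardsize is not None:
        # Round up so no shard exceeds the approximate desired size by much.
        return max(1, math.ceil(total_bytes / shardsize))
    return nshards if nshards is not None else 1

print(resolve_nshards(10 * 1024**3, shardsize=1024**3))  # → 10
```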
It will probably be implemented as a MapReduce job (map-only for cases 1 and 2).
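One note on the same-key-same-shard requirement: for case 1, where record order does not matter, a hash partitioner (as in Hadoop's default) is enough to keep equal keys together, as sketched below; case 2 would instead need a range partitioner so that concatenating part-0, part-1, ... preserves the total key order.

```python
def shard_for(key, nshards):
    # Route every record with a given key to the same shard index,
    # in the style of Hadoop's default hash partitioner.
    return hash(key) % nshards

records = [("x", 1), ("y", 2), ("x", 3)]
routing = {key: shard_for(key, 4) for key, _ in records}
print(routing)  # both ("x", 1) and ("x", 3) go to the same shard
```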