Hadoop Common / HADOOP-1926

Design/implement a set of compression benchmarks for the map-reduce framework


Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.15.0
    • Component/s: None
    • Labels: None

    Description

      It would be nice to benchmark the various compression codecs available to Hadoop (existing codecs such as zlib and lzo, and in future bzip2 and others) and run these benchmarks along with our nightlies or weeklies.
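      As a rough illustration of what one benchmark data point could look like, here is a minimal standalone sketch that times a single codec compressing an in-memory buffer. It assumes only the org.apache.hadoop.io.compress API; the buffer size, the codec choice, and the class name CodecMicroBenchmark are illustrative, not part of the proposed suite.

      {code:java}
      import java.io.ByteArrayOutputStream;

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.io.compress.CompressionCodec;
      import org.apache.hadoop.io.compress.CompressionOutputStream;
      import org.apache.hadoop.io.compress.DefaultCodec;
      import org.apache.hadoop.util.ReflectionUtils;

      public class CodecMicroBenchmark {
        public static void main(String[] args) throws Exception {
          // Instantiate the codec the way the framework does, so any
          // native libraries are picked up if available.
          Configuration conf = new Configuration();
          CompressionCodec codec =
              ReflectionUtils.newInstance(DefaultCodec.class, conf);

          byte[] input = new byte[64 * 1024 * 1024]; // synthetic payload
          ByteArrayOutputStream sink = new ByteArrayOutputStream();

          long start = System.currentTimeMillis();
          CompressionOutputStream out = codec.createOutputStream(sink);
          out.write(input);
          out.finish();
          out.close();
          long elapsed = System.currentTimeMillis() - start;

          System.out.println("compressed " + input.length + " bytes to "
              + sink.size() + " bytes in " + elapsed + " ms");
        }
      }
      {code}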

      Here are some steps:
      a) Fix HADOOP-1851 (map output compression codec cannot be set independently of the job output compression codec).
      b) Implement a random-text-writer along the lines of examples/randomwriter to generate large amounts of synthetic textual data for use in sort. One way to do this is to pick a word at random from /usr/share/dict/words until we get enough bytes per map; a sketch of that loop follows this list. To be safe, we could store a snapshot of the words as an array of Strings in examples/RandomTextWriter.java.
      c) Take a dump of Wikipedia (http://download.wikimedia.org/enwiki/) and/or the ebooks from Project Gutenberg (http://www.gutenberg.org/MIRRORS.ALL) and use them as non-synthetic data to run sort/wordcount against.
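      Here is a minimal sketch of the word-picking loop from b). The WORDS array is a hypothetical stand-in for the snapshot of /usr/share/dict/words that the real examples/RandomTextWriter.java would embed, and the class and method names are illustrative only.

      {code:java}
      import java.util.Random;

      public class RandomTextSketch {
        // Hypothetical snapshot of /usr/share/dict/words; the real class
        // would embed a much larger array of Strings.
        private static final String[] WORDS = {
          "diurnalness", "Homoiousian", "spiranthic", "tetragynian"
        };

        // Append randomly chosen, space-separated words until the
        // requested number of bytes per map has been produced.
        public static String generate(long bytesPerMap, long seed) {
          Random random = new Random(seed);
          StringBuilder sb = new StringBuilder();
          long written = 0;
          while (written < bytesPerMap) {
            String word = WORDS[random.nextInt(WORDS.length)];
            sb.append(word).append(' ');
            written += word.length() + 1;
          }
          return sb.toString();
        }
      }
      {code}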

      For both b) and c) we should set up nightly/weekly benchmark runs with different codecs for the reduce-outputs and the map-outputs (shuffle), and track each combination; a configuration sketch follows.
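      A minimal sketch, assuming HADOOP-1851 is in, of how a benchmark run could set the shuffle and job-output codecs independently. It uses the org.apache.hadoop.mapred API as it later stabilized (JobConf.setMapOutputCompressorClass, FileOutputFormat.setCompressOutput/setOutputCompressorClass); the exact entry points at 0.15 may differ, and the codec pairing is just one example combination to track.

      {code:java}
      import org.apache.hadoop.io.compress.DefaultCodec;
      import org.apache.hadoop.io.compress.GzipCodec;
      import org.apache.hadoop.mapred.FileOutputFormat;
      import org.apache.hadoop.mapred.JobConf;

      public class CodecBenchmarkConf {
        public static JobConf configure(JobConf conf) {
          // Compress intermediate map outputs (the shuffle) with zlib.
          conf.setCompressMapOutput(true);
          conf.setMapOutputCompressorClass(DefaultCodec.class);

          // Independently compress the final reduce outputs with gzip.
          FileOutputFormat.setCompressOutput(conf, true);
          FileOutputFormat.setOutputCompressorClass(conf, GzipCodec.class);
          return conf;
        }
      }
      {code}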

      Thoughts?

      Attachments

        1. HADOOP-1926_1_20071002.patch
          51 kB
          Arun Murthy
        2. HADOOP-1926_2_20071002.patch
          51 kB
          Arun Murthy


            People

              Assignee: acmurthy (Arun Murthy)
              Reporter: acmurthy (Arun Murthy)
              Votes: 0
              Watchers: 0
