Hadoop Map/Reduce
  MAPREDUCE-1183

Serializable job components: Mapper, Reducer, InputFormat, OutputFormat et al

    Details

    • Type: Improvement
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 0.21.0
    • Fix Version/s: None
    • Component/s: client
    • Labels: None

      Description

      Currently the Map-Reduce framework uses the Configuration to pass information about the various aspects of a job such as the Mapper, Reducer, InputFormat, OutputFormat, OutputCommitter etc., and application developers use the org.apache.hadoop.mapreduce.Job.set*Class APIs to set them at job-submission time:

      job.setMapperClass(IdentityMapper.class);
      job.setReducerClass(IdentityReducer.class);
      job.setInputFormatClass(TextInputFormat.class);
      job.setOutputFormatClass(TextOutputFormat.class);
      ...

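      Under the covers, the set*Class calls above only record a class name in the job's Configuration, and the task side re-creates the object reflectively. The fragment below is a rough sketch of that round trip (the property name is illustrative, and the new-API base Mapper, which is itself an identity mapper, stands in for a user class); it is also why no per-instance state can travel with the job today:

      // Sketch only: Configuration (org.apache.hadoop.conf) stores the class
      // name, ReflectionUtils (org.apache.hadoop.util) rebuilds the instance.
      Configuration conf = new Configuration();
      conf.setClass("mapreduce.job.map.class", Mapper.class, Mapper.class);

      // Task side: only the class name survived, so the mapper comes back
      // freshly constructed, with no user-supplied state.
      Class<? extends Mapper> mapperClass =
          conf.getClass("mapreduce.job.map.class", Mapper.class, Mapper.class);
      Mapper mapper = ReflectionUtils.newInstance(mapperClass, conf);
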
      The proposal is that we move to a model where end-users interact with org.apache.hadoop.mapreduce.Job via actual objects, which are then serialized by the framework:

      job.setMapper(new IdentityMapper());
      job.setReducer(new IdentityReducer());
      job.setInputFormat(new TextInputFormat("in"));
      job.setOutputFormat(new TextOutputFormat("out"));
      ...

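      How the framework would serialize these objects is left open here; one minimal sketch, assuming plain Java serialization into the Configuration (the helper names, the property-key handling and the Base64 encoding are hypothetical, not part of this proposal), could look like the following:

      // Hypothetical helpers: Java-serialize a job component into the
      // Configuration as a Base64 string and rebuild it on the task side.
      // Requires the component (Mapper, InputFormat, ...) to implement
      // java.io.Serializable, which is what this issue asks for.
      // (Configuration: org.apache.hadoop.conf; Base64: java.util)
      static void setComponent(Configuration conf, String key, Serializable component)
          throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buffer)) {
          out.writeObject(component);
        }
        conf.set(key, Base64.getEncoder().encodeToString(buffer.toByteArray()));
      }

      static Object getComponent(Configuration conf, String key)
          throws IOException, ClassNotFoundException {
        byte[] bytes = Base64.getDecoder().decode(conf.get(key));
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
          return in.readObject();
        }
      }

      With something along these lines, new TextInputFormat("in") would reach the tasks with its constructor arguments and instance fields intact, instead of being reduced to a class name.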

    People

    • Assignee: Owen O'Malley
    • Reporter: Arun C Murthy
    • Votes: 3
    • Watchers: 31
