Hadoop Common: HADOOP-2085

Map-side joins on sorted, equally-partitioned datasets

    Details

    • Type: New Feature
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.16.0
    • Component/s: None
    • Labels: None

      Description

      Motivation

      Given a set of sorted datasets keyed with the same class and yielding equal
      partitions, it is possible to effect a join of those datasets prior to the
      map. This could save costs in re-partitioning, sorting, shuffling, and
      writing out data required in the general case.

      Interface

      The attached code offers the following interface to users of these classes.

      property                    | required | value
      mapred.join.expr            | yes      | Join expression to effect over input data
      mapred.join.keycomparator   | no       | WritableComparator class to use for comparing keys
      mapred.join.define.<ident>  | no       | Class mapped to identifier in join expression

      The join expression understands the following grammar:

      func ::= <ident>([<func>,]*<func>)
      func ::= tbl(<class>,"<path>");
      

      Operations included in this patch are partitioned into one of two types:
      join operations emitting tuples and "multi-filter" operations emitting a
      single value derived from (but not necessarily included in) a set of input values.
      For a given key, each operation will consider the cross product of all
      values for all sources at that node.

      Identifiers supported by default:

      identifier | type        | description
      inner      | Join        | Full inner join
      outer      | Join        | Full outer join
      override   | MultiFilter | For a given key, prefer values from the rightmost source

      A user of this class must set the InputFormat for the job to
      CompositeInputFormat and define a join expression accepted by the preceding
      grammar. For example, both of the following are acceptable:

      inner(tbl(org.apache.hadoop.mapred.SequenceFileInputFormat.class,
                "hdfs://host:8020/foo/bar"),
            tbl(org.apache.hadoop.mapred.SequenceFileInputFormat.class,
                "hdfs://host:8020/foo/baz"))
      
      outer(override(tbl(org.apache.hadoop.mapred.SequenceFileInputFormat.class,
                         "hdfs://host:8020/foo/bar"),
                     tbl(org.apache.hadoop.mapred.SequenceFileInputFormat.class,
                         "hdfs://host:8020/foo/baz")),
            tbl(org.apache.hadoop.mapred.SequenceFileInputFormat.class,
                "hdfs://host:8020/foo/rab"))
      

      CompositeInputFormat includes a handful of convenience methods to aid
      construction of these verbose statements.
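
      By way of example (a sketch added here, not part of the original text;
      the class name is hypothetical, and it assumes the compose() helpers
      mentioned above together with the JobConf-based API of this era), a job
      using the first expression above might be configured like so:

      import org.apache.hadoop.fs.Path;
      import org.apache.hadoop.mapred.JobClient;
      import org.apache.hadoop.mapred.JobConf;
      import org.apache.hadoop.mapred.SequenceFileInputFormat;
      import org.apache.hadoop.mapred.join.CompositeInputFormat;

      public class InnerJoinJobSketch {
        public static void main(String[] args) throws Exception {
          JobConf job = new JobConf(InnerJoinJobSketch.class);
          job.setJobName("map-side inner join (sketch)");

          // Hand all input handling to the join framework.
          job.setInputFormat(CompositeInputFormat.class);

          // Equivalent to writing the inner(tbl(...), tbl(...)) string by hand.
          job.set("mapred.join.expr", CompositeInputFormat.compose(
              "inner", SequenceFileInputFormat.class,
              new Path("hdfs://host:8020/foo/bar"),
              new Path("hdfs://host:8020/foo/baz")));

          // Optional: supply a WritableComparator class for the key ordering.
          // job.set("mapred.join.keycomparator", MyComparator.class.getName());

          // Mapper, output types, and output path omitted from this sketch.
          JobClient.runJob(job);
        }
      }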

      As in the second example, joins may be nested. Users may provide a
      comparator class in the mapred.join.keycomparator property to
      specify the ordering of their keys, or accept the default comparator as
      returned by WritableComparator.get(keyclass).

      Users can specify their own join operations, typically by overriding
      JoinRecordReader or MultiFilterRecordReader and mapping that class
      to an identifier in the join expression using the
      mapred.join.define.ident property, where ident is the identifier
      appearing in the join expression. Users may elect to emit or modify values
      passing through their join operation. Consulting the existing operations for
      guidance is recommended. Adding arguments is considerably more complex (and
      only partially supported), as one must also add a Node type to the parse
      tree. One is probably better off extending RecordReader in most cases.
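
      As a concrete sketch of this path (added here, not part of the original
      text; the class, the "leftmost" identifier, and the constructor signature
      are assumptions modeled on the shipped operations), a user-defined
      operation that prefers the leftmost source might look roughly like this:

      import java.io.IOException;
      import org.apache.hadoop.io.Text;
      import org.apache.hadoop.io.WritableComparator;
      import org.apache.hadoop.mapred.JobConf;
      import org.apache.hadoop.mapred.join.MultiFilterRecordReader;
      import org.apache.hadoop.mapred.join.TupleWritable;

      // Hypothetical "leftmost" operation: for each key, emit the value from
      // the leftmost source that has one (the stock "override" prefers the
      // rightmost). Consult the shipped operations for the authoritative
      // constructor form expected by the framework.
      public class LeftmostRecordReader extends MultiFilterRecordReader<Text, Text> {

        public LeftmostRecordReader(int id, JobConf conf, int capacity,
            Class<? extends WritableComparator> cmpcl) throws IOException {
          super(id, conf, capacity, cmpcl);
        }

        @Override
        protected Text emit(TupleWritable dst) {
          for (int i = 0; i < dst.size(); ++i) {
            if (dst.has(i)) {            // slot i holds a value for this key
              return (Text) dst.get(i);
            }
          }
          return null;                   // no source had the key
        }
      }

      It would then be registered and used with something like:

      job.set("mapred.join.define.leftmost", LeftmostRecordReader.class.getName());
      job.set("mapred.join.expr",
          "leftmost(tbl(org.apache.hadoop.mapred.SequenceFileInputFormat.class,"
          + "\"hdfs://host:8020/foo/bar\"),"
          + "tbl(org.apache.hadoop.mapred.SequenceFileInputFormat.class,"
          + "\"hdfs://host:8020/foo/baz\"))");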

      Design

      As alluded to above, the design defines inner (Composite) and leaf (Wrapped)
      types for the join tree. Delegation satisfies most requirements of the
      InputFormat contract, particularly validateInput and getSplits.
      Most of the work in this patch concerns getRecordReader. The
      CompositeInputFormat itself delegates to the parse tree generated by
      Parser.

      Hierarchical Joins

      Each RecordReader from the user must be "wrapped", since effecting a
      join requires the framework to track the head value from each source. Since
      the cross product of all values for each composite level of the join is
      emitted to its parent, all sources [1] must be capable of repeating the
      values for the current key. To avoid keeping an excessive number of copies
      (one per source per level), each composite requests its children to populate
      a JoinCollector with an iterator over its values. This way, there is
      only one copy of the current key for each composite node, the head key-value
      pair for each leaf, and storage at each leaf for all the values matching the
      current key at the parent collector (if it is currently participating in a
      join at the root). Strategies have been employed to avoid excessive copying
      when filling a user-provided Writable, but they have been conservative
      (e.g. in MultiFilterRecordReader, the value emitted is cloned in case
      the user modifies the value returned, possibly changing the state of a
      JoinCollector in the tree). For example, if the following sources
      contain these key streams:

      A: 0  0   1    1     2        ...
      B: 1  1   1    1     2        ...
      C: 1  6   21   107   ...
      D: 6  28  496  8128  33550336 ...
      

      Let A-D be wrapped sources and x,y be composite operations. If the
      expression is of the form x(A, y(B,C,D)), then when the current key at
      the root is 1 the tree may look like this:

                  x (1, [ I(A), [ I(y) ] ] )
                /   \
               W     y (1, [ I(B), I(C), EMPTY ])
               |   / | \
               |  W  W  W
               |  |  |  D (6, V_6) => EMPTY
               |  |  C (6, V_6)    => V_{1,1} @1,1
               |  B (2, V_2)       => V_{1,1} V_{1,2} V_{1,3} V_{1,4} @1,3
               A (2, V_2)          => V_{1,1} V_{1,2} @1,2
      

      A JoinCollector from x will have been created by requesting an
      iterator from A and another from y. The iterator at y is built by
      requesting iterators from B, C, and D. Since D doesn't contain the
      key 1, it returns an empty iterator. Since the value to return for a given
      join is a Writable provided by the user, the iterators returned are also
      responsible for writing the next value in that stream. For multilevel joins
      passing through a subclass of JoinRecordReader, the value produced will
      contain tuples within tuples; iterators for composites delegate to
      sub-iterators responsible for filling the value in the tuple at the position
      matching their position in the composite. In a sense, the only iterators
      that write to a tuple are the RecordReaders at the leaves. Note that
      this also implies that emitted tuples may not contain values from each
      source, but they will always have the same capacity.
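
      To make the shape of the emitted records concrete, here is a sketch of a
      map task consuming them (added here, not part of the original text; the
      key and output types are assumptions). With CompositeInputFormat the
      value delivered to the map is a TupleWritable holding one positional slot
      per source in the join expression:

      import java.io.IOException;
      import org.apache.hadoop.io.Text;
      import org.apache.hadoop.mapred.MapReduceBase;
      import org.apache.hadoop.mapred.Mapper;
      import org.apache.hadoop.mapred.OutputCollector;
      import org.apache.hadoop.mapred.Reporter;
      import org.apache.hadoop.mapred.join.TupleWritable;

      public class JoinMapperSketch extends MapReduceBase
          implements Mapper<Text, TupleWritable, Text, Text> {
        public void map(Text key, TupleWritable value,
            OutputCollector<Text, Text> out, Reporter reporter) throws IOException {
          // Slots are positional: index 0 is the leftmost source in the join
          // expression. A slot may be empty (e.g. under an outer join).
          if (value.has(0) && value.has(1)) {
            out.collect(key, new Text(value.get(0) + "\t" + value.get(1)));
          }
        }
      }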

      Writables

      Writable objects- including InputSplits and TupleWritables-
      encode themselves in the following format:

      <count><class1><class2>...<classn><obj1><obj2>...<objn>
      

      The inefficiency is regrettable- particularly since this overhead is
      incurred for every instance and most often the tuples emitted will be
      processed only within the map- but the encoding satisfies the Writable
      contract well enough to be emitted to the reducer, written to disk, etc. It
      is hoped that general compression will trim the most egregious waste. It
      should be noted that the framework does not actually write out a tuple (i.e.
      does not suffer from this deficiency) unless emitting one from
      MultiFilterRecordReader (a rare case in practice, it is hoped).
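
      As an illustration only (this is not the framework's own TupleWritable
      code, just a sketch of the layout above using stock Hadoop utilities), a
      Writable following that self-describing encoding could serialize itself
      like so:

      import java.io.DataInput;
      import java.io.DataOutput;
      import java.io.IOException;
      import org.apache.hadoop.io.Text;
      import org.apache.hadoop.io.Writable;
      import org.apache.hadoop.io.WritableUtils;
      import org.apache.hadoop.util.ReflectionUtils;

      public class SelfDescribingTuple implements Writable {
        private Writable[] values = new Writable[0];

        public void write(DataOutput out) throws IOException {
          WritableUtils.writeVInt(out, values.length);        // <count>
          for (Writable w : values) {
            Text.writeString(out, w.getClass().getName());    // <class_i>
          }
          for (Writable w : values) {
            w.write(out);                                     // <obj_i>
          }
        }

        public void readFields(DataInput in) throws IOException {
          values = new Writable[WritableUtils.readVInt(in)];  // <count>
          String[] classes = new String[values.length];
          for (int i = 0; i < values.length; ++i) {
            classes[i] = Text.readString(in);                 // class names first
          }
          for (int i = 0; i < values.length; ++i) {
            try {                                             // instantiate, then fill
              values[i] = (Writable) ReflectionUtils.newInstance(
                  Class.forName(classes[i]), null);
            } catch (ClassNotFoundException e) {
              throw new IOException("Unknown class: " + classes[i]);
            }
            values[i].readFields(in);
          }
        }
      }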

      Extensibility

      The join framework is modestly extensible. Practically, users seeking to add
      their own identifiers to join expressions are limited to extending
      JoinRecordReader and MultiFilterRecordReader. There is considerable
      latitude within these constraints, as illustrated in
      OverrideRecordReader, where values in child RecordReaders are
      skipped instead of incurring the overhead of building the iterator (that
      will inevitably be discarded). [2] For most cases, the user need only
      implement the combine and/or emit methods in their subclass. It is expected
      that most will find that the three default operations will suffice.

      Adding arguments to expressions is more difficult. One would need to include
      a Node type for the parser, which requires some knowledge of its inner
      workings. The model in this area is crude and requires refinement before it
      can be "extensible" by a reasonable definition.

      Performance

      I have no numbers.

      Notes

      1. This isn't strictly true. The "leftmost" source will never need to repeat
      itself. Adding a pseudo-ResettableIterator to handle this case would be
      a welcome addition.

      2. Note that- even if reset- the override will only loop through the values
      in the rightmost key, instead of repeating that series a number of times
      equal to the cardinality of the cross product of the discarded streams
      (regrettably, looking at the code of OverrideRecordReader is more
      illustrative than this explanation).

      1. 2085.patch
        101 kB
        Chris Douglas
      2. 2085-2.patch
        101 kB
        Chris Douglas
      3. 2085-3.patch
        109 kB
        Chris Douglas
      4. 2085-4.patch
        108 kB
        Chris Douglas
      5. 2085-5.patch
        112 kB
        Chris Douglas

        Activity

        Tsz Wo Nicholas Sze added a comment -

        By assuming certain properties of the input datasets, the join operation might be performed more efficiently. I think this is an interesting observation.

        I tried to read your patch but still cannot fully understand it. It would be great if you could give an example (like WordCount) to show how to use the new code.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12368110/2085.patch
        against trunk revision r588341.

        @author +1. The patch does not contain any @author tags.

        javadoc -1. The javadoc tool appears to have generated messages.

        javac +1. The applied patch does not generate any new compiler warnings.

        findbugs -1. The patch appears to introduce 4 new Findbugs warnings.

        core tests +1. The patch passed core unit tests.

        contrib tests -1. The patch failed contrib unit tests.

        Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1003/testReport/
        Findbugs warnings: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1003/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
        Checkstyle results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1003/artifact/trunk/build/test/checkstyle-errors.html
        Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1003/console

        This message is automatically generated.

        Chris Douglas added a comment -

        Fixed findbugs warnings, addressed javadoc, changed Token type to accommodate Nicholas's feedback.

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12368513/2085-2.patch
        against trunk revision r588778.

        @author +1. The patch does not contain any @author tags.

        javadoc +1. The javadoc tool did not generate any warning messages.

        javac +1. The applied patch does not generate any new compiler warnings.

        findbugs -1. The patch appears to introduce 3 new Findbugs warnings.

        core tests -1. The patch failed core unit tests.

        contrib tests +1. The patch passed contrib unit tests.

        Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1012/testReport/
        Findbugs warnings: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1012/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
        Checkstyle results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1012/artifact/trunk/build/test/checkstyle-errors.html
        Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1012/console

        This message is automatically generated.

        Chris Douglas added a comment -

        More findbugs, added an example

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12368550/2085-3.patch
        against trunk revision r588778.

        @author +1. The patch does not contain any @author tags.

        javadoc +1. The javadoc tool did not generate any warning messages.

        javac +1. The applied patch does not generate any new compiler warnings.

        findbugs +1. The patch does not introduce any new Findbugs warnings.

        core tests -1. The patch failed core unit tests.

        contrib tests -1. The patch failed contrib unit tests.

        Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1019/testReport/
        Findbugs warnings: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1019/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
        Checkstyle results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1019/artifact/trunk/build/test/checkstyle-errors.html
        Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1019/console

        This message is automatically generated.

        Owen O'Malley added a comment -

        The JavaDoc for TupleWritable's class description isn't right. (It wasn't updated.)

        The IOExceptions that are wrapping other errors should have descriptive string messages in them.

        You don't need to define a new ReflectionUtils.newInstance without the config, because if you pass in null, it won't use it.

        All of the instances that use cls.newInstance should be using ReflectionUtils.newInstance, since it does the constructor cache and handles the non-public class/constructor problems.

        The fieldnames m and n need descriptive names.

        I really don't like protected fields, especially when they are set/used multiple levels below where they are defined.

        Joydeep Sen Sarma added a comment -

        Chris - can you help me understand how the splits work? This might be useful for some of our apps, and I'm trying to understand what assumptions etc. are being made.

        we have sorted data files containing the same sets of keys - but corresponding hdfs chunks of each file may not have the same set of keys. it wasn't clear to me from going through the patch how the merge-join is being parallelized. from node.getsplits - it seemed as if the ith split of the join record reader is composed of the ith split of each of the component files. but in this case - the join keys wouldn't line up ..

        also - given that the map task works on multiple hdfs files - where does it get scheduled?

        Chris Douglas added a comment -

        Joydeep-

        The assumption it makes is precisely as you describe it: the ith split from each source must contain the same keys. It does only the most rudimentary verification of this, IIRC verifying that it received an equal number of splits from each source. Generally, getting splits should be cheap, so it doesn't verify key ranges for any of the splits (and probably ought not to).

        I've asked around, and "the way" out of this onerous constraint involves using MapFiles. At a high level, you need an index for your input data so your splits can be informed. I'm not familiar with the details here, I'm afraid.

        CompositeInputSplit::getLocations() returns an unweighted union of hosts from its child splits. It would be preferable to weight a host that contains multiple splits for a given composite split, but for now it provides a flat list.

        Joydeep Sen Sarma added a comment -

        understood. i was thinking this might be using mapfiles or some kind of binary search to line up splits.

        Dumb question - our data is laid out as files (representing partitions) within a single directory - with a directory representing a pseudo-table. Is this compatible with where you are going? i.e. can I join one such directory against another - with (say) an inputformat that emits each file as a split (and making sure the order is the same)?

        The other case is that sometimes one dataset is partitioned (say) 16 way - but another is partitioned 32 way. This can happen when datasets are of unequal size (otherwise we end up creating too many files). In the above case, 2 files from the latter dataset have to be joined against each file from the former (assuming simple modulo arithmetic partitioning). would this be possible?

        Chris Douglas added a comment -

        I don't know if this is helpful, but: as it exists now, the framework is incapable of finer granularity than an InputFormat, but neither will it object to whatever you can fit into that framework.

        What you describe- directories as pseudo-tables with files as partitions- sounds like exactly what this is geared toward.

        As an example of a workaround/partial fit, consider your 16/32-way case. Whether it would be worthwhile/possible to express in the existing code will depend on a few factors: if the two files you're joining in the 32-way set are pairwise disjoint, then you can simply use an OverrideRecordReader with two custom InputFormats (each taking one "half" of the pair) to "join" them. However, if they're not disjoint, then you'll lose values. [1] Feeding the output of that into a join with your 16-way dataset might work, but it's a bit of a hack. You'd need to be certain of the partitions of both datasets to be confident in your results.

        Notes
        1. Really, you're looking for a different implementation of CompositeRecordReader::JoinCollector that emits values from each source in turn, rather than emitting the cross-product; this is being considered, but may not be in the immediate future. It's of limited use with the requirement that each source be sorted and partitioned in the same way, unfortunately. Most simply want to merge two sorted datasets without worrying about how they're partitioned (HADOOP-2120).

        Chris Douglas added a comment -

        Updated javadocs, made better use of ReflectionUtils, improved some variable names, made protected fields private or final (excluding the parser, which is temporary).

        Hadoop QA added a comment -

        +1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12369206/2085-4.patch
        against trunk revision r592860.

        @author +1. The patch does not contain any @author tags.

        javadoc +1. The javadoc tool did not generate any warning messages.

        javac +1. The applied patch does not generate any new compiler warnings.

        findbugs +1. The patch does not introduce any new Findbugs warnings.

        core tests +1. The patch passed core unit tests.

        contrib tests +1. The patch passed contrib unit tests.

        Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1077/testReport/
        Findbugs warnings: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1077/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
        Checkstyle results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1077/artifact/trunk/build/test/checkstyle-errors.html
        Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1077/console

        This message is automatically generated.

        Milind Bhandarkar added a comment -

        A few comments on the patch:

        If mapred.join.expr is not specified, CompositeInputFormat should throw a better exception, rather than an NPE.

        A simple benchmark that we can use to compare performance with the reducer-side joins is desirable. But, it can be a separate jira.

        The motivation and design that is in this jira should be in package.html for o.a.h.mapred.join.

        Chris Douglas added a comment -

        Addressed NPE and included interface info from this JIRA in o/a/h/mapred/join/package.html

        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12369694/2085-5.patch
        against trunk revision r595563.

        @author +1. The patch does not contain any @author tags.

        javadoc +1. The javadoc tool did not generate any warning messages.

        javac +1. The applied patch does not generate any new compiler warnings.

        findbugs +1. The patch does not introduce any new Findbugs warnings.

        core tests +1. The patch passed core unit tests.

        contrib tests -1. The patch failed contrib unit tests.

        Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1110/testReport/
        Findbugs warnings: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1110/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
        Checkstyle results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1110/artifact/trunk/build/test/checkstyle-errors.html
        Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1110/console

        This message is automatically generated.

        Chris Douglas added a comment -

        Trying hudson again

        Hadoop QA added a comment -

        +1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12369694/2085-5.patch
        against trunk revision r595563.

        @author +1. The patch does not contain any @author tags.

        javadoc +1. The javadoc tool did not generate any warning messages.

        javac +1. The applied patch does not generate any new compiler warnings.

        findbugs +1. The patch does not introduce any new Findbugs warnings.

        core tests +1. The patch passed core unit tests.

        contrib tests +1. The patch passed contrib unit tests.

        Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1116/testReport/
        Findbugs warnings: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1116/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
        Checkstyle results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1116/artifact/trunk/build/test/checkstyle-errors.html
        Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1116/console

        This message is automatically generated.

        Owen O'Malley added a comment -

        I just committed this. Thanks, Chris.

        Hudson added a comment -

        Integrated in Hadoop-Nightly #326 (See http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/326/ )
        Joydeep Sen Sarma added a comment -

        Hey folks - i was thinking of an easier way to do merge joins that removes the need for any assumptions about equipartitioning. it can also work with sorted and non-sorted data sets. It's also extremely simple to implement:

        • assume that there are two (or more) tables to be joined. Let's say Table A is sorted on join column and Table B is not.
        • we use Hadoop-2921 (or same technique implemented at higher layer) to force maps on A to be aligned by sort boundary. We also turn off sorting in map phase when maps work on table A. B is treated as usual.
        • Reducers do a merge join of A and B.

        If A and B are both sorted by the join column - then we are doing a pure merge-join in the reducer (maps will not sort).

        A and B don't need to be equipartitioned. It's of course not as efficient as the merge-join implemented here - but it's also way more flexible .. (metadata about table sorting columns could be maintained outside and the map jobs configured based on whether sort and join columns match).

        Chris Douglas added a comment -

        Joydeep- excluding the optimization for not re-sorting A, it sounds like you're describing the join framework in contrib. The idea of metadata storing sorting columns, etc. is compelling, but a reasonable use of it would best be done by something like Pig, no? The most likely next step for more complex, reduce-side joins would be different map tasks for different datasets (e.g. emit result of operation w/ col 1,3 in B; identity for A, possibly in different formats sorted on whatever) followed by a join in the reduce. A sufficiently general execution engine- that could make decisions about whether or not the data is already sorted on some column, whether the join can happen on the map or reduce side, etc- belongs in framework code, I agree, but I'm less convinced it should live in this framework code.

        We could change the idea of a job to include map tasks across multiple datasets- similar, yet very much unlike the join work in this patch- followed by a reduce step. To take your example, starting two different maps over A and B st the partitions are congruent (i.e. K1 in A and K1 in B go to the same partition), essentially providing different map classes for different input paths. Of course, all good ideas have JIRAs: HADOOP-372


          People

          • Assignee: Chris Douglas
          • Reporter: Chris Douglas
          • Votes: 0
          • Watchers: 2
