Lucene - Core
LUCENE-532

[PATCH] Indexing on Hadoop distributed file system

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 1.9
    • Fix Version/s: 4.0-ALPHA
    • Component/s: core/index
    • Labels:
      None

      Description

      In my current project we needed a way to create very large Lucene indexes on the Hadoop distributed file system (DFS). When we tried to do it directly on DFS using the Nutch FsDirectory class, we immediately found that indexing fails because the DfsIndexOutput.seek() method throws UnsupportedOperationException. The reason for this behavior is clear: DFS does not support random updates, so the seek() method can't be supported (at least not easily).

      Well, if we can't support random updates, the question is: do we really need them? A search of the Lucene code revealed two places that call the IndexOutput.seek() method: one in TermInfosWriter and another in CompoundFileWriter. As we weren't planning to use CompoundFileWriter, the only place that concerned us was TermInfosWriter.

      TermInfosWriter uses IndexOutput.seek() in its close() method to write the total number of terms in the file back into the beginning of the file. It was very simple to change the file format slightly and write the number of terms into the last 8 bytes of the file instead of into the beginning. The only other place that has to be fixed for this to work is the SegmentTermEnum constructor, which should read this piece of information at position = file length - 8.

      With this format hack we were able to use FsDirectory to write an index directly to DFS without any problems. Well, we still don't index directly to DFS for performance reasons, but at least we can build small local indexes and merge them into the main index on DFS without copying the big main index back and forth.

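      To make the format change concrete, below is a minimal sketch of the idea in plain Java I/O; it is not the actual patch code, and the class and method names are illustrative only.

        import java.io.DataOutputStream;
        import java.io.IOException;
        import java.io.RandomAccessFile;

        class TrailingCountSketch {
            // On close, append the term count as the last 8 bytes of the file
            // instead of seeking back into the header; append-only stores such
            // as DFS cannot seek while writing.
            static void finishWriting(DataOutputStream out, long termCount) throws IOException {
                out.writeLong(termCount);   // purely sequential write, no seek()
                out.close();
            }

            // Reads remain random-access, so the reader can seek to
            // fileLength - 8 and pick the count up from there.
            static long readTermCount(RandomAccessFile in) throws IOException {
                in.seek(in.length() - 8);
                return in.readLong();
            }
        }
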
      Attachments

      1. cfs-patch.txt
        8 kB
        Kevin Oliver
      2. indexOnDFS.patch
        2 kB
        Igor Bolotin
      3. SegmentTermEnum.patch
        0.6 kB
        Igor Bolotin
      4. TermInfosWriter.patch
        0.9 kB
        Igor Bolotin

        Activity

        Robert Muir added a comment -

        This is fixed by LUCENE-2373, just set your codec to AppendingCodec.

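        For reference, configuring this looks roughly like the sketch below. It assumes the Lucene 4.x IndexWriterConfig.setCodec API and the AppendingCodec class from the lucene-codecs module; package names and version constants may differ in your release.

          import org.apache.lucene.analysis.standard.StandardAnalyzer;
          import org.apache.lucene.codecs.appending.AppendingCodec;
          import org.apache.lucene.index.IndexWriter;
          import org.apache.lucene.index.IndexWriterConfig;
          import org.apache.lucene.store.Directory;
          import org.apache.lucene.util.Version;

          class AppendOnlyWriterSketch {
              // Opens an IndexWriter that only appends to files, so it can target
              // append-only file systems such as HDFS.
              static IndexWriter open(Directory dir) throws java.io.IOException {
                  IndexWriterConfig conf = new IndexWriterConfig(
                          Version.LUCENE_40, new StandardAnalyzer(Version.LUCENE_40));
                  conf.setCodec(new AppendingCodec());  // codec never seeks during write
                  return new IndexWriter(dir, conf);
              }
          }
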
        Shai Erera added a comment -

        I see some progress in that direction was made under LUCENE-2373, but I'm not sure whether this Codec is a generic one (i.e. can support any file we write today) or tailored for StandardTermDict. It'd be great if Lucene could support an append-only FS!

        Ning Li added a comment -

        Is the use of seek-and-write in ChecksumIndexOutput making Lucene less likely to support all-sequential writes (i.e. no seek on write)? ChecksumIndexOutput is currently used by SegmentInfos.

        Michael Busch added a comment -

        I think LUCENE-783 (move all file headers to segments file) would solve this issue nicely. Then there would not be the need to call seek() in CFSWriter and TermInfosWriter anymore. I'd love to work on 783, but not sure if time permits in the near future.

        Grant Ingersoll added a comment -

        Anyone have a follow-up on this? Hadoop-based indexing seems like it would be a nice feature. It sounds like there was a lot of support for this, but it was never committed. Is this still an issue?

        Michael McCandless added a comment -

        Thank you for the patch & unit test!

        This is actually the same approach that I started with. But I ruled
        it out because I don't think it's safe to do arithmetic (ie, adding
        lengths to compute positions) on file positions.

        Meaning, one can imagine a Directory implementation that's doing some
        kind of compression where on writing N bytes the file position does
        not in fact advance by N bytes. Or maybe an implementation that must
        escape certain bytes, or it's writing to XML or using some kind of
        alternate coding system, or something along these lines. I don't know
        if such Directory implementations exist today, but, I don't want to
        break them if they do nor preclude them in the future.

        And so the only value you should ever pass to "seek()" is a value you
        previously obtained by calling "getFilePosition()". The current
        javadocs for these methods seem to imply this.

        However, on looking into this question further ... I do see that there
        are places now where Lucene already does arithmetic on file positions.
        For example, in accessing a *.fdx or *.tdx file we assume we can
        find a given entry at file position FORMAT_SIZE + 8 * index.

        Maybe it is OK to make the definition of Directory.seek() stricter, by
        requiring that in fact the position we pass to seek is always the same
        as "the number of bytes written", thereby allowing us to do arithmetic
        based on bytes/length and call seek with such values? I'm nervous
        about making this API change.

        I think this is the open question. Does anyone have any input to help
        answer this question?

        Lucene currently makes this assumption, albeit in a fairly contained
        way I think (most other calls to seek seem to be values previously
        obtained by getFilePosition()).

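        As an aside, the file-position arithmetic mentioned in the comment above looks roughly like this sketch; FORMAT_SIZE and the one-long-per-document layout are taken from the comment, and the constant value here is purely illustrative.

          import java.io.IOException;
          import java.io.RandomAccessFile;

          class FdxArithmeticSketch {
              static final long FORMAT_SIZE = 4;  // assumed header size, illustrative only

              // The *.fdx index stores one 8-byte pointer per document, so the reader
              // computes the entry's position by arithmetic rather than using a value
              // previously obtained from getFilePosition().
              static long fieldDataPointer(RandomAccessFile fdx, int docID) throws IOException {
                  fdx.seek(FORMAT_SIZE + 8L * docID);  // arithmetic on a file position
                  return fdx.readLong();
              }
          }
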
        Kevin Oliver added a comment -

        Here are some diffs showing how to remove seeks from CompoundFileWriter (these are against an older version of Lucene, 1.4.2 I think, but the general idea is the same). There's also a test.

        Michael McCandless added a comment -

        Alas, in trying to change the CFS format so that file offsets are stored at the end of the file, when implementing the corresponding changes to CompoundFileReader, I discovered that this approach isn't viable. I had been thinking the reader would look at the file length, subtract numEntry*sizeof(long), seek to there, and then read the offsets (longs). The problem is: we can't know sizeof(long), since this is dependent on the actual storage implementation, for the same reasoning above. I.e., we can't always assume one byte = one file position.

        So the only solution I can think of (to avoid seek during write) would be to write a separate file, for each *.cfs file, that contains the file offsets corresponding to the cfs file. E.g., if we have _1.cfs we would also have _1.cfsx which holds the file offsets. This is somewhat costly if we care about the number of files (it doubles the number of files in the simple case of a bunch of segments with no deletes/separate norms).

        Yonik had actually mentioned in LUCENE-704 that fixing CFS writing to not use seek was not very important, ie, it would be OK to not use compound files with HDFS as the store.

        Does anyone see a better approach?

        Andrzej Bialecki added a comment -

        Hadoop cannot (yet) change file position when writing. All files are write-once, i.e. once they are closed they are pretty much immutable. They are also append-only - writing uses a subclass of OutputStream.

        Michael McCandless added a comment -

        Also: I like the idea of never doing "seek" when writing. The less functionality we rely on from the filesystem, the more portable Lucene will be. Since Lucene is so wonderfully simple, never using "seek" during write is in fact very feasible.

        I think to do this we need to change the CFS file format, so that the offsets are stored at the end of the file. We actually can't pre-compute where the offsets will be because we can't make assumptions about how the file position changes when bytes are written: this is implementation specific. For example, if the Directory implementation does on-the-fly compression, then the file position will not be the number of bytes written. So I think we have to write at the end of the file.

        Any opinions or other suggestions?

        Michael McCandless added a comment -

        Sorry, I meant "dup of LUCENE-704 " above.

        Michael McCandless added a comment -

        I think this is the same issue as LUCENE-532 (I just marked that one as a dup).

        But there was one difference: does HDFS allow writing to the same file (eg "segments") more than once? I thought it did not because it's "write once"? Do we need to not do that (write to the same file more than once) to work with HDFS (lock-less gets us closer)?

        Otis Gospodnetic added a comment -

        I'm hesitant to commit without the CFS support. It looks like more and more people are using CFS indexes.

        Chris added a comment -

        Don't mean to resurrect old issues, but we're having the same problem here indexing to DFS and I've applied the patch and it works for us. Wondering if I'm missing something, or if this is being addressed somewhere else in trunk that I haven't found.

        Otis Gospodnetic added a comment -

        This actually looks like a good patch that doesn't break any tests. I'll commit it in the coming days, as it looks like it should be backwards compatible... except CFS won't be supported unless somebody patches that, too (I tried quickly and soon got unit tests to fail).

        Igor Bolotin added a comment -

        Attached is a new patch which uses the format number to determine where to read the size, as discussed.
        Thanks!

        Doug Cutting added a comment -

        Instead of changing the value to -1, we should not write a size value in the header at all. We can change the format number and use that to determine where to read the size. Does that make sense?

        Also, please submit patches as a single 'svn diff' from the top of the lucene tree.

        Thanks!

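        A rough sketch of the format-number idea above (not the committed patch; the constant and layout here are illustrative): the format number at the start of the file tells the reader whether the size lives in the header as before, or in the trailing 8 bytes.

          import java.io.IOException;
          import java.io.RandomAccessFile;

          class FormatAwareSizeReader {
              static final int FORMAT_TRAILING_SIZE = -3;  // hypothetical new format id

              static long readSize(RandomAccessFile in) throws IOException {
                  in.seek(0);
                  int format = in.readInt();
                  if (format <= FORMAT_TRAILING_SIZE) {  // format ids historically count downward
                      in.seek(in.length() - 8);          // new format: size is the last 8 bytes
                  }                                      // old format: size follows the header
                  return in.readLong();
              }
          }
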
        Igor Bolotin added a comment -

        Two patch files are attached


          People

          • Assignee:
            Unassigned
          • Reporter:
            Igor Bolotin
          • Votes:
            3
          • Watchers:
            4
