LUCENE-2373

Create a Codec to work with streaming and append-only filesystems

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 4.0-ALPHA
    • Component/s: core/index
    • Labels: None
    • Lucene Fields: New

      Description

      Since the early 2.x days Lucene has used a skip/seek/write trick to patch the length of the terms dict into a place near the start of the output data file. This, however, makes it impossible to use Lucene with append-only filesystems such as HDFS.

      In the post-flex trunk the following code in StandardTermsDictWriter initiates this:

          // Count indexed fields up front
          CodecUtil.writeHeader(out, CODEC_NAME, VERSION_CURRENT); 
      
          out.writeLong(0);                             // leave space for end index pointer
      

      and completes this in close():

            out.seek(CodecUtil.headerLength(CODEC_NAME));
            out.writeLong(dirStart);
      

      I propose to change this layout so that this pointer is simply stored at the end of the file. It's always 8 bytes long, and we know the final length of the file from Directory, so reading it is a single additional seek(length - 8), which is not much considering the benefits.
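      As an illustrative sketch only (not the attached patch; Lucene-style calls are simplified, and dir, fileName and out are assumed to be in scope), the write side becomes purely sequential and the read side pays one extra seek from the end of the file:

          // Writer: no placeholder and no seeking back; the pointer is the last 8 bytes.
          CodecUtil.writeHeader(out, CODEC_NAME, VERSION_CURRENT);
          // ... write the terms dict sequentially ...
          out.writeLong(dirStart);           // trailer: file pointer of the terms index
          out.close();

          // Reader: the pointer is always the final long of the file.
          IndexInput in = dir.openInput(fileName);
          in.seek(in.length() - 8);
          long dirStart = in.readLong();
          in.seek(dirStart);                 // jump to the terms index and read it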

      1. appending.patch (51 kB) - Andrzej Bialecki
      2. appending.patch (48 kB) - Andrzej Bialecki
      3. LUCENE-2372-2.patch (55 kB) - Andrzej Bialecki
      4. LUCENE-2373.patch (53 kB) - Michael McCandless


          Activity

          Andrzej Bialecki added a comment -

          Just noticed that the same problem exists in SimpleStandardTermsIndexWriter, and I propose the same solution there.

          Earwin Burrfoot added a comment -

          And then IndexOutput.seek() can be deleted. Cool.

          Michael McCandless added a comment -

          I would love to make Lucene truly write-once (and remove IndexOutput.seek), but... this approach makes me a little nervous...

          In some environments, relying on the length of the file to be accurate might be risky: it's metadata that can be subject to different client-side caching than the file's contents. E.g. on NFS I've seen issues where the file length was stale yet the file contents were not.

          Maybe we could offer a separate codec that takes this approach, for use on filesystems like HDFS that can't seek during write? We should refactor standard codec so that "where this long gets stored" can be easily overridden by a subclass.

          Or, alternatively, we could write this "index of the index" to a separate file?

          Shai Erera added a comment -

          I'd rather not count on file length as well ... so a put/getTermDictSize method on Codec will allow one to implement it however one wants, if running on HDFS for example?

          Lance Norskog added a comment -

          Does this make it possible to add a good checksum?

          The Cloud and NRT architectures involve copying lots of segment files around, and disk, RAM, and network bandwidth all have error rates. It would be great if the process of making an index file included, on the fly, the creation of a solid checksum that is then baked into the file at the last moment. It should also be in the segments.gen file, but it is more important that the checksum be embedded in the file itself, such that walking the whole file gives a fixed value.
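          A minimal, self-contained sketch of that idea in plain Java (java.util.zip rather than any Lucene API; class and method names are illustrative): accumulate a CRC while writing and bake it in as the final 8 bytes, so a verifier can walk the whole file and compare.

              import java.io.*;
              import java.nio.ByteBuffer;
              import java.util.zip.CRC32;
              import java.util.zip.CheckedOutputStream;

              class ChecksumTrailerSketch {
                // Write the payload followed by an 8-byte CRC32 trailer covering the payload.
                static void write(File f, byte[] payload) throws IOException {
                  CRC32 crc = new CRC32();
                  try (DataOutputStream out = new DataOutputStream(
                          new CheckedOutputStream(new FileOutputStream(f), crc))) {
                    out.write(payload);               // checksum accumulates as bytes are written
                    out.writeLong(crc.getValue());    // baked in at the last moment
                  }
                }

                // Re-walk everything except the trailer and compare.
                static boolean verify(File f) throws IOException {
                  byte[] all = java.nio.file.Files.readAllBytes(f.toPath());
                  long stored = ByteBuffer.wrap(all, all.length - 8, 8).getLong();
                  CRC32 crc = new CRC32();
                  crc.update(all, 0, all.length - 8);
                  return crc.getValue() == stored;
                }
              }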

          Andrzej Bialecki added a comment -

          Aggregated comments...

          Mike: I'd hate to add yet another file just for this purpose. Long-term it's perhaps worth it. Short-term for HDFS use case it would be enough to provide a method to write a header and a trailer. Codecs that can seek/overwrite would just use the header, codecs that can't would use both. Codecs that operate on filesystems with unreliable fileLength could write a sync marker before the trailer, and there could be a back-tracking mechanism that starts from the reported fileLength and then tries to find the sync marker (reading back, and/or ahead).

          Shai: hm, but this would require a separate file that stores the header, right?

          Lance: yes. The original use case I had in mind was HDFS (Hadoop File System) which already implements on-the-fly checksums. If we go the way that Mike suggested, i.e. implementing a separate codec, then this should be a simple addition. We could also perhaps structure this as a codec wrapper so that this capability can be applied to other codecs too.
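          Purely to illustrate the back-tracking idea above (hypothetical; neither the standard codec nor the proposed appending codec does this), a reader on a filesystem with an unreliable fileLength could scan backwards from the reported length until it finds a known sync marker and read the pointer stored right after it:

              class SyncMarkerSketch {
                // Assumed trailer layout: ... data ... [SYNC_MARKER][dirStart] EOF
                static final long SYNC_MARKER = 0x2F1A6B3C9D4E5F07L;   // made-up magic value

                static long findDirStart(java.io.RandomAccessFile in) throws java.io.IOException {
                  long pos = in.length() - 16;          // marker (8 bytes) + pointer (8 bytes)
                  while (pos >= 0) {
                    in.seek(pos);
                    if (in.readLong() == SYNC_MARKER) {
                      return in.readLong();             // dir pointer follows the marker
                    }
                    pos--;                              // reported length may be stale; back up
                  }
                  throw new java.io.IOException("sync marker not found");
                }
              }

          Scanning forward would additionally be needed if the reported length were too small, which is part of why this stays a thought experiment in the discussion below.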

          Michael McCandless added a comment -

          Mike: I'd hate to add yet another file just for this purpose. Long-term it's perhaps worth it. Short-term for HDFS use case it would be enough to provide a method to write a header and a trailer. Codecs that can seek/overwrite would just use the header, codecs that can't would use both.

          I think that's a good plan – abstract the header write/read methods so that another codec can easily subclass to change how/where these are written. I think Lucene's default (standard) codec should continue to do what it does now? And then HDFS can take the standard codec, and subclass StandardTermsDictWriter/Reader to put the header at the end.

          Codecs that operate on filesystems with unreliable fileLength could write a sync marker before the trailer, and there could be a back-tracking mechanism that starts from the reported fileLength and then tries to find the sync marker (reading back, and/or ahead).

          Can't we just use the current standard codec's approach by default? Back-tracking seems dangerous. Eg what if .fileLength() is too small on such filesystems?

          Does this make it possible to add a good checksum?

          A codec could easily do this, today – it's orthogonal to using HDFS. EG Lucene already has a ChecksumIndexOutput/Input, so this should be a simple cutover in standard codec (though we would need to fix up the classes, eg to make "get me the IndexOutput/Input" method, so a subclass could override).

          Andrzej Bialecki added a comment -

          I think that's a good plan - abstract the header write/read methods so that another codec can easily subclass to change how/where these are written. I think Lucene's default (standard) codec should continue to do what it does now? And then HDFS can take the standard codec, and subclass StandardTermsDictWriter/Reader to put the header at the end.

          Assuming we add writeHeader/writeTrailer methods, the standard codec would write the header as it does today using writeHeader(), and in writeTrailer() it would just patch it the same way it does today.
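          A minimal sketch of that split (hypothetical class and method names; CODEC_NAME and the CodecUtil calls are those from the snippet in the description, and this is not the actual patch):

              abstract class TermsDictWriterSketch {
                // Standard codec: reserve 8 bytes right after the header...
                protected void writeHeader(IndexOutput out) throws IOException {
                  CodecUtil.writeHeader(out, CODEC_NAME, VERSION_CURRENT);
                  out.writeLong(0);                     // leave space for end index pointer
                }
                // ...and patch it in writeTrailer(), as today.
                protected void writeTrailer(IndexOutput out, long dirStart) throws IOException {
                  out.seek(CodecUtil.headerLength(CODEC_NAME));
                  out.writeLong(dirStart);
                }
              }

              class AppendingTermsDictWriterSketch extends TermsDictWriterSketch {
                @Override
                protected void writeHeader(IndexOutput out) throws IOException {
                  CodecUtil.writeHeader(out, CODEC_NAME, VERSION_CURRENT);   // no placeholder
                }
                @Override
                protected void writeTrailer(IndexOutput out, long dirStart) throws IOException {
                  out.writeLong(dirStart);              // written once, at the very end; no seek
                }
              }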

          Codecs that operate on filesystems with unreliable fileLength could write a sync marker before the trailer, and there could be a back-tracking mechanism that starts from the reported fileLength and then tries to find the sync marker (reading back, and/or ahead).

          Can't we just use the current standard codec's approach by default? Back-tracking seems dangerous. Eg what if .fileLength() is too small on such filesystems?

          Yes, of course, I was just dreaming up a filesystem that is both append-only and with unreliable fileLength ... not that I know of any off-hand

          Lance Norskog added a comment -

          Lance: yes. The original use case I had in mind was HDFS (Hadoop File System) which already implements on-the-fly checksums. If we go the way that Mike suggested, i.e. implementing a separate codec, then this should be a simple addition. We could also perhaps structure this as a codec wrapper so that this capability can be applied to other codecs too.

          +1 for in Lucene itself. Lots of large installations don't use HDFS to move shards around. Also, the HDFS checksum only counts after the file has touched down at the HDFS portal: there are error rates in local RAM, local hard disk, shared file systems and network I/O. Doing the checksum at the origin is more useful.

          Lance Norskog added a comment -

          Grid filesystems like larger blocksizes. HDFS uses a default blocksize of 128k right? At this size, is it worth doing a few merges/optimizes to make a segment fit? This pushes the problem of grid filesystems away from low-level indexing. I would want to index locally and push the index through a separate grid FS access manager.

          Andrzej Bialecki added a comment -

          HDFS uses 64 or 128 MegaByte blocks.

          Lance Norskog added a comment -

          Another reason to create files in a fully sequential mode is that SSD drives do not like random writes - they can get very slow. SSDs function well with sequential writes, sequential reads, and random reads, so if this issue is fixed, they should work well with Lucene.

          Lance Norskog added a comment -

          HDFS uses 64 or 128 MegaByte blocks.

          Yet another reason to manage memory carefully.

          It should be possible to hit this watermark by using the NoMergePolicy and a RAM buffer size of 64M or 128M. Hitting the RAM buffer size causes a segment to flush to a file with little breakage (unused disk space), and it will never be merged again, cutting HDFS overheads. This should give a predictable and consistent segment writing overhead, right?
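          Roughly the configuration being described, as a hedged sketch (the 4.x-style IndexWriterConfig API is assumed, exact constant names vary between versions, and analyzer and dir are presumed to already exist):

              // Flush a segment roughly once per HDFS block and never merge it again.
              IndexWriterConfig conf = new IndexWriterConfig(Version.LUCENE_40, analyzer);
              conf.setRAMBufferSizeMB(128.0);                       // match the 128 MB block size
              conf.setMergePolicy(NoMergePolicy.COMPOUND_FILES);    // flushed segments are final
              IndexWriter writer = new IndexWriter(dir, conf);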

          Andrzej Bialecki added a comment -

          Good point, Lance - though for larger indexes the number of blocks (hence the number of sub-readers in your scenario) would be substantial, maybe too high. Hadoop doesn't do much of local caching of remote blocks, but I implemented a HDFS Directory in Luke that uses ehcache, and it works quite well.

          Andrzej Bialecki added a comment -

          This patch contains an implementation of AppendingCodec and necessary refactorings in CodecProvider and SegmentInfos to support append-only filesystems. There is a unit test that illustrates the use of the codec and verifies that it works with append-only FS.

          Note 1: SegmentInfos write/read methods used the seek/rewrite trick to update the checksum, so it was necessary to extend CodecProvider with methods to provide custom implementations of SegmentInfosWriter/Reader (and default implementations thereof).

          Note 2: o.a.l.index.codecs.* doesn't have access to many package-level APIs from o.a.l.index.*, so I had to relax the visibility of some methods and fields. Perhaps this may be tightened back in a later revision...

          Patch is relative to the latest trunk (rev. 958137).
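          As a purely hypothetical usage sketch (apart from AppendingCodec itself, the class and method names below are assumptions rather than the patch's actual API), the wiring described above might look roughly like this:

              // A provider that supplies the append-only codec plus SegmentInfos reader/writer
              // implementations that avoid the seek/rewrite checksum trick.
              class AppendingCodecProvider extends CodecProvider {
                AppendingCodecProvider() {
                  register(new AppendingCodec());                  // make the codec resolvable
                }
                @Override
                public SegmentInfosWriter getSegmentInfosWriter() {
                  return new AppendingSegmentInfosWriter();        // hypothetical name
                }
                @Override
                public SegmentInfosReader getSegmentInfosReader() {
                  return new AppendingSegmentInfosReader();        // hypothetical name
                }
              }

          An IndexWriter pointed at such a provider would then only ever append to the files it creates.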

          Robert Muir added a comment -

          Note 2: o.a.l.index.codecs.* doesn't have access to many package-level APIs from o.a.l.index.*, so I had to relax the visibility of some methods and fields. Perhaps this may be tightened back in a later revision...

          Hi, I wouldn't worry about this. In general Mike had this problem when moving things to the codec package, so we added a javadocs tag for consistent labeling: @lucene.internal

          This expands to the following text: NOTE: This API is for Lucene internal purposes only and might change in incompatible ways in the next release.

          Example usage: http://svn.apache.org/repos/asf/lucene/dev/trunk/lucene/src/java/org/apache/lucene/index/IndexFileNames.java

          Additionally we added another tag: @lucene.experimental, which you can use for any new APIs you introduce that might not have the final stable API (most codecs use this already I think).

          This expands to the following text: WARNING: This API is experimental and might change in incompatible ways in the next release.

          Example usage:
          http://svn.apache.org/repos/asf/lucene/dev/trunk/lucene/src/java/org/apache/lucene/index/codecs/pulsing/PulsingCodec.java

          Andrzej Bialecki added a comment -

          Yup, I used @lucene.experimental in this patch.

          Robert Muir added a comment -

          Cool, I think @lucene.internal would be good for SegmentInfo etc that must become public.

          Andrzej Bialecki added a comment -

          I would appreciate a review and a go/no-go from other committers. Especially regarding the part that changes CodecProvider API by adding SegmentInfoWriter/Reader.

          Michael McCandless added a comment -

          This looks great Andrzej! This gives codecs full control over reading/writing of SegmentInfo/s, which now empowers a Codec to store any per-segment info it needs to (eg, hasProx, which is now a hardwired bit in SegmentInfo, is really a codec level detail). Probably the codec could return a (private to it) subclass of SegmentInfo to hold such extra info...

          Maybe we should provide default impls for CodecProvider.getSegmentInfosReader/Writer? (Ie returning the Default impls)

          Also, should we factor out the "leave space for index pointer" (out.writeLong(0)) to the subclass? (And, the reading of that dirOffset). Because this is wasted now for the appending codec...

          Andrzej Bialecki added a comment -

          Probably the codec could return a (private to it) subclass of SegmentInfo to hold such extra info...

          Nice idea, I didn't think about this - yes, this should be possible now.

          Maybe we should provide default impls for CodecProvider.getSegmentInfosReader/Writer? (Ie returning the Default impls)

          DefaultCodecProvider does exactly this. Or do you mean instead of using abstract methods in CodecProvider?

          Also, should we factor out the "leave space for index pointer" (out.writeLong(0)) to the subclass? (And, the reading of that dirOffset). Because this is wasted now for the appending codec...

          The reading is already factored out, but the writing ... Well, it's just 8 bytes per segment ... the reason I didn't factor it out is that it would require additional before/after delegation, or a replication of larger sections of code...

          Michael McCandless added a comment -

          DefaultCodecProvider does exactly this. Or do you mean instead of using abstract methods in CodecProvider?

          Right, I meant move the default impls into CodecProvider, so an app with a custom CodecProvider need not implement the defaults.

          The reading is already factored out, but the writing ... Well, it's just 8 bytes per segment ... the reason I didn't factor it out is that it would require additional before/after delegation, or a replication of larger sections of code...

          I hear you, but it looks sort of hackish to factor out one part (seeking to the dir) but not the other part (writing/reading the dirOffset); but I'm fine w/ committing it like this. Maybe just add a comment in AppendingTermsDictReader.seekDir that dirOffset, which the writer had written into header of file, is ignored?

          Andrzej Bialecki added a comment -

          I hear you, but it looks sort of hackish to factor out one part (seeking to the dir) but not the other part (writing/reading the dirOffset); but I'm fine w/ committing it like this. Maybe just add a comment in AppendingTermsDictReader.seekDir that dirOffset, which the writer had written into header of file, is ignored?

          I hear you too. I'll try to factor out the whole section; if it becomes too messy then I'll add a comment. Re: CodecProvider default impls - ok.

          Andrzej Bialecki added a comment -

          It wasn't too messy after all - here's an updated patch that incorporates your suggestions.

          Michael McCandless added a comment -

          Patch looks great! Thanks Andrzej.

          I tweaked a few things – added some missing copyrights, removed some unnecessary imports, etc. I also strengthened the test a bit by having it write 2 segments and then optimize them, which hit an exception because seek was called when building the compound file doc store (cfx) file. So I fixed the test to also disable that compound file, and added an explanation of this in AppendingCodec's jdocs.

          We still need a CHANGES entry, but... should this go into contrib/misc instead of core? Few people need to use the appending codec?

          Andrzej Bialecki added a comment -

          contrib/misc is fine with me. I'll update the patch to include contrib/CHANGES.txt and move the content to contrib/misc.

          Andrzej Bialecki added a comment -

          Updated patch. I added comments both in top-level CHANGES and in contrib/CHANGES, to account for two new areas of functionality - the customizable SegmentInfosWriter and the appending codec. If there are no objections I'd like to commit it.

          Michael McCandless added a comment -

          Looks great Andrzej! +1 to commit.

          Andrzej Bialecki added a comment -

          Committed to trunk in revision 962694. Thank you all for helping and reviewing this issue!


            People

            • Assignee: Unassigned
            • Reporter: Andrzej Bialecki
            • Votes: 0
            • Watchers: 0
