Hadoop Common: HADOOP-1377

Creation time and modification time for hadoop files and directories

    Details

    • Type: New Feature
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.14.0
    • Component/s: None
    • Labels: None

      Description

      This issue will document the requirements, design and implementation of creation times and modification times of hadoop files and directories.

      My proposal is to support two additional attributes for each file and directory in HDFS. The "creation time" is the time when the file/directory was created. It is an 8-byte integer stored in each FSDirectory.INode. The "modification time" is the time when the last modification occurred to the file/directory. It is an 8-byte integer stored in the FSDirectory.INode. These two fields are stored in the FSEdits and FSImage as part of the transaction that created the file/directory.
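
      The two fields described above can be sketched as follows. This is illustrative Java only, not the actual FSDirectory.INode code; the class and method names are stand-ins.

```java
// Illustrative sketch of the proposal: each inode carries two 8-byte
// (long) timestamps, in milliseconds since the epoch.
class INodeSketch {
    final long creationTime;   // set once, when the file/directory is created
    long modificationTime;     // updated on namespace changes

    INodeSketch(long now) {
        this.creationTime = now;
        this.modificationTime = now;  // equal to creationTime until a change occurs
    }

    // A file-create or file-delete inside a directory bumps the directory's
    // modification time; setting attributes such as replication does not.
    void recordNamespaceChange(long now) {
        this.modificationTime = now;
    }
}
```

      Both timestamps start equal at creation, matching the description below that a file's modification time equals its creation time while HDFS files are unmodifiable.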

      My current proposal is to not support "access time" for a file/directory. It is costly to implement and current applications might not need it.

      In the current implementation, the "modification time" for a file will be the same as its creation time because HDFS files are currently unmodifiable. Setting file attributes (e.g. setting the replication factor) of a file does not modify the "modification time" of that file. The "modification time" for a directory is either its creation time or the time when the most recent file-delete or file-create occurred in that directory.

      A new command named "hadoop dfs -lsl" will display the creation time and modification time of the files/directories that it lists. The output of the existing command "hadoop dfs -ls" will not be affected.

      The ClientProtocol will change because DFSFileInfo will have two additional fields: the creation time and modification time of the file that it represents. This information can be retrieved by clients through the ClientProtocol.getListings() method. The FileSystem public API will have two additional methods: getCreationTime() and getModificationTime().

      The datanodes are completely transparent to this design and implementation, and require no change.

      1. 1377.patch
        54 kB
        Doug Cutting
      2. 1377-noctime.patch
        50 kB
        Doug Cutting
      3. CreationModificationTime.html
        8 kB
        dhruba borthakur
      4. CreationTime8.patch
        54 kB
        dhruba borthakur

        Issue Links

          Activity

          Doug Cutting added a comment -

          This should build on the work of HADOOP-1298. In particular, the public API for reading creation and modification times should be through FileStatus.

          dhruba borthakur added a comment -

          A design document for implementing creation time and modification time for files and directories.

          Doug Cutting added a comment -

          As I mentioned above, I'd prefer we use a FileStatus object that contains all file metadata, rather than adding a new FileSystem method per new metadata field added. If an application needs more than one metadata field from a file it should not invoke more than one RPC, nor should FileSystem implementations be forced to implement a cache. Currently HDFS does cache status internally, but that is fragile and will break when, e.g., modification times and lengths are permitted to change.

          dhruba borthakur added a comment -

          I like the idea of having an explicit call getFileStatus() that returns an object of type FileStatus as implemented in HADOOP-1298. Once that patch is submitted, I will make the corresponding changes to this implementation.

          dhruba borthakur added a comment -

          A first version of the patch is available for review. It introduces a new FileSystem call of the form:

          public FileStatus getFileStatus(Path f) throws IOException

          This API retrieves the creation time and modification time of files (along with other file attributes).

          Would appreciate some review comments, especially on the API enhancement.
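
          For illustration, here is how the proposed call composes with per-field accessors. All type and method names below are stand-ins, not the final Hadoop signatures, and the checked IOException from the real proposal is omitted for brevity.

```java
// Hypothetical sketch of the proposed getFileStatus() API. A single status
// call serves several metadata queries without extra RPCs, which is the
// point Doug raises above.
interface FileStatusSketch {
    long getLen();
    long getCreationTime();
    long getModificationTime();
}

abstract class FileSystemSketch {
    // Mirrors the proposed: public FileStatus getFileStatus(Path f)
    public abstract FileStatusSketch getFileStatus(String path);

    // Convenience accessors delegate to one status lookup.
    public long getCreationTime(String path) {
        return getFileStatus(path).getCreationTime();
    }

    public long getModificationTime(String path) {
        return getFileStatus(path).getModificationTime();
    }
}
```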

          Konstantin Shvachko added a comment -

          DFSClient.java

          • Please avoid introducing methods with the deprecated UTF8. String would be better in this case.
            public DFSFileInfo getFileInfo(UTF8 src) throws IOException {

          FileStatus

          • all imports are redundant
          • I think that, for compatibility across different file systems, FileStatus should
            be an interface. For HDFS we should have a class HDFSFileStatus, which should be part
            of DFSFileInfo, combining all status fields. That way we will not need to change
            protocols and internal name-node interfaces when we add/modify status fields.
          • In the FileStatus constructor you are assigning blockSize to itself.
            this.blockSize = blockSize;
            I guess a parameter is missing.

          DfsPath

          • I am not sure whether HADOOP-1377 should be built on top of HADOOP-1298 or
            vice versa, but I agree with Doug that the public API should use FileStatus.
            That is why DfsPath should introduce getFileStatus() rather than getters for each new field.

          FSDirectory

          • Rather than multiplying parameters for each method related to meta-data modification
            I would just add FileStatus as a parameter once.
          • getFileInfo should not be public and should not have UTF8 as a parameter
            public DFSFileInfo getFileInfo(UTF8 src) throws IOException {...}

            Same for FSNamesystem.getFileInfo().

          FSEditLog

          • loadFSEdits() has a lot of code replication, which deserves to be wrapped in
            separate method(s). I'd serialize the entire HDFSFileStatus, which is Writable anyway.
            Same for FSImage: I'd serialize the entire FileStatus.
          dhruba borthakur added a comment -

          1. Implemented Konstantin's suggestion of making FileStatus an interface (instead of a class).

          2. Dates are now printed as MMM dd yyyy HH:mm:ss
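
          The date format from point 2 can be demonstrated as follows. The class name is illustrative (FsShell formats inline), and the fixed US locale and UTC zone are choices made here only so the output is deterministic.

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

// Formats a millisecond timestamp using the listing pattern mentioned
// above: "MMM dd yyyy HH:mm:ss".
class ModTimeFormat {
    static String format(long millis) {
        SimpleDateFormat fmt =
            new SimpleDateFormat("MMM dd yyyy HH:mm:ss", Locale.US);
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));  // deterministic output
        return fmt.format(new Date(millis));
    }

    public static void main(String[] args) {
        System.out.println(format(0L));  // the Unix epoch in UTC
    }
}
```

          For the epoch, this prints "Jan 01 1970 00:00:00".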

          dhruba borthakur added a comment -

          Removed the deprecated UTF8 parameter from some methods. I left the minor code duplication in FSEditLog as it was.

          dhruba borthakur added a comment -

          Implemented most of Konstantin's suggestions. Merged the patch with the latest trunk.

          Hadoop QA added a comment -

          -1, could not apply patch.

          The patch command could not apply the latest attachment http://issues.apache.org/jira/secure/attachment/12360062/CreationTime4.patch as a patch to trunk revision r548794.

          Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/304/console

          Please note that this message is automatically generated and may represent a problem with the automation system and not the patch.

          dhruba borthakur added a comment -

          Yet another patch, merged with the latest trunk.

          Doug Cutting added a comment -

          1. FsShell.java needs to import java.text.SimpleDateFormat.

          2. We should deprecate the FileSystem methods getReplication, getBlockSize, getLength and isDirectory. And these should no longer be abstract methods, but should be implemented in terms of getStatus().

          3. This issue should implement getStatus() for all FileSystem implementations, doing more than throwing an exception. These can be implemented to call the FileSystem getReplication, getBlockSize, isDirectory and getLength implementations. So only getCreationTime and getModificationTime cannot trivially be properly implemented. I'm not certain whether in these cases it is better to throw an exception or return zero.
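
          The back-compatibility pattern described in points 2 and 3 can be sketched like this. All names here are illustrative stand-ins for the Hadoop classes, not the exact signatures that landed.

```java
// Sketch: the old per-field getters survive for back-compatibility but
// delegate to a single status lookup, so they need not stay abstract.
abstract class LegacyCompatFs {
    static class Status {
        final long length;
        final boolean dir;
        final short replication;
        final long blockSize;
        Status(long length, boolean dir, short replication, long blockSize) {
            this.length = length;
            this.dir = dir;
            this.replication = replication;
            this.blockSize = blockSize;
        }
    }

    // The one remaining abstract metadata method.
    abstract Status getStatus(String path);

    /** @deprecated use {@link #getStatus} instead */
    @Deprecated long getLength(String p)       { return getStatus(p).length; }
    /** @deprecated use {@link #getStatus} instead */
    @Deprecated boolean isDirectory(String p)  { return getStatus(p).dir; }
    /** @deprecated use {@link #getStatus} instead */
    @Deprecated short getReplication(String p) { return getStatus(p).replication; }
    /** @deprecated use {@link #getStatus} instead */
    @Deprecated long getBlockSize(String p)    { return getStatus(p).blockSize; }
}
```

          Under this shape an application needing several metadata fields pays for one lookup, addressing Doug's RPC concern.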

          dhruba borthakur added a comment -

          Incorporated Doug's review comments. InMemoryFileSystem, RawLocalFileSystem and S3FileSystem return 0 for getCreationTime(filename) and getModificationTime(filename).

          Hadoop QA added a comment -

          +1. http://issues.apache.org/jira/secure/attachment/12360252/CreationTime6.patch applied and successfully tested against trunk revision r549284.

          Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/315/testReport/

          Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/315/console
          Doug Cutting added a comment -

          Unfortunately this patch conflicts with HADOOP-1283, which I just committed.

          Also, in LocalFileSystem, RawLocalFileSystem and InMemoryFileSystem, the getLength(), isDirectory(), getBlockSize(), and getReplication() methods can be eliminated, with these method bodies copied to each class's status constructor. The implementations in FilterFileSystem.java can also be removed. So then the only implementations of these methods will be in FileSystem.java, for back-compatibility. Does that make sense?

          dhruba borthakur added a comment -

          Merged the patch with the latest trunk and incorporated Doug's comments.

          Hadoop QA added a comment -

          +1. http://issues.apache.org/jira/secure/attachment/12360377/CreationTime8.patch applied and successfully tested against trunk revision r549624.

          Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/323/testReport/

          Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/323/console
          Doug Cutting added a comment -

          The S3, in-memory and local status implementations can be further improved. I'll attach a patch in a minute.

          Doug Cutting added a comment -

          Improves the status implementations for the S3, in-memory and local filesystems. Also changes the format of FsShell listings to more closely match Unix 'ls -l'. Dhruba, do these changes look reasonable to you?

          Doug Cutting added a comment -

          One more thought on this: none of the filesystems will implement creation date, so I propose that we remove this feature from the API, supporting only modification date. That's all that java.io.File supports, and all that HDFS will support for some time. We can always add creation date later if we need it, but right now it's just unusable baggage. Any objections? If not, I'll attach a new version of the patch without creation date support shortly.

          Konstantin Shvachko added a comment -
          • TestCreateModTime.java: Redundant imports, method, and variable declarations:
            import java.util.Collection;
            import java.util.ArrayList;
            import java.util.Iterator;
            import java.util.Date;
            63: private void checkFile(FileSystem fileSys, Path name, int repl)
            100: DistributedFileSystem dfs = (DistributedFileSystem) fileSys;
            132: long ctime2 = stat.getCreationTime();
            133: long mtime2 = stat.getModificationTime();
          • Classes RawLocalFileStatus, InMemoryFileStatus, and S3FileStatus have almost identical implementations.
            It makes sense to have one base class that provides a default FileStatus implementation, and make those three
            subclasses if the default is not good enough. The base class can be an inner class of FileSystem, or FileStatus
            itself can be declared as an abstract class instead of an interface.
          • FSConstants: The comment describing changes related to the new layout version should be updated.
            // Current version:
            ...................
          • FSEditLog:
            fromLogTimeStamp(UTF8) is not used anywhere
          • DistributedFileSystem: Unused variable in getFileStatus()
            FileStatus stat = null;
          • FSDirectory:
            long modTime = namesystem.now();
            Should be accessed in a static way NameSystem.now()
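
          One reading of the base-class suggestion above is sketched here: an abstract FileStatus carrying the shared fields, which the per-filesystem classes would extend only when the defaults are insufficient. The names are illustrative, not the actual Hadoop classes.

```java
// Abstract base class holding the fields common to all FileStatus
// implementations, as an alternative to a FileStatus interface.
abstract class AbstractFileStatus {
    private final long length;
    private final long modificationTime;

    protected AbstractFileStatus(long length, long modificationTime) {
        this.length = length;
        this.modificationTime = modificationTime;
    }

    public long getLen() { return length; }
    public long getModificationTime() { return modificationTime; }
}

// A filesystem-specific subclass adds or overrides only what differs;
// here the defaults are good enough, so the subclass is trivial.
class LocalFileStatusSketch extends AbstractFileStatus {
    LocalFileStatusSketch(long length, long modificationTime) {
        super(length, modificationTime);
    }
}
```

          This would remove the near-duplicate accessor bodies in RawLocalFileStatus, InMemoryFileStatus, and S3FileStatus at the cost of fixing FileStatus as a class rather than an interface.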
          dhruba borthakur added a comment - (edited)

          Hi Doug, thanks for your changes. They look good. +1.

          DFS implements CreationTime. In fact, since files are typically large in HDFS and it takes a while before all the data is written to a file (e.g. output of Reduce), the creation time and modification time of a file will not be the same. I think it is helpful to keep the implementation of CreationTime in the generic FileSystem API. As of now, only DistributedFileSystem implements CreationTime.

          Doug Cutting added a comment -

          But applications cannot rely on creation time as meaningful, since they cannot get it for the local filesystem. And for HDFS, it really only allows you to see how long it took to write a file. Is there an important use case where an application needs creation time, distinct from modified time?

          Doug Cutting added a comment -

          Here's a version that removes support for creation time.

          This also addresses all of Konstantin's issues save one: I didn't create a base class for FileStatus. Only the length field is common to all of these, and its implementation is simple enough that I don't think sharing code buys much--there's no real logic that's shared. I wouldn't oppose the use of a base class, but I don't think we'll suffer much without it in this case.

          Sameer Paranjpye added a comment -

          > One more thought on this: none of the filesystem will implement creation date, so I propose that we remove this feature from the API, supporting only modification date

          Creation date falls in the same category as things like block size and replication, which are also mostly unsupported, so maybe we could treat it the same way. Can we make it available in DFSFileStatus but not in the FileStatus interface?

          Doug Cutting added a comment -

          > Creation date falls in the same category as things like block size and replication [ ... ]

          We have code that uses block size and replication for important optimizations, so even though they're not universal, they have important use cases. But what is the use case for creation time as presently implemented? What will it enable that's difficult or impossible without it? Knowing the amount of time it took to write a file seems like trivia, not critical functionality.

          dhruba borthakur added a comment -

          When HDFS supports "append" and "truncate", the difference between creation time and modification time might become more apparent. But you are right, I do not have a very strong case for implementing Creation Time.

          > We have code that uses block size and replication for important optimizations

          Can you please point me to some piece of code that uses FileSystem.getReplication()? I thought that it was mostly for display purposes.

          Doug Cutting added a comment -

          > When HDFS supports "append" and "truncate", the difference between creation time and modification time might become more apparent.

          Yes, and that might be a good time to add support for creation time. Until then, it's pretty useless, so why bother?

          > please point me to some piece of code that uses FileSystem.getReplication()?

          We increase the replication of job.xml and job.jar in JobClient.java so that the datanodes that contain these files are not overwhelmed when a job first starts.

          Sameer Paranjpye added a comment -

          No, I don't have a particularly compelling use case for creation date. It can be dispensed with. Creation time isn't even POSIX; from 'man 2 stat':

          The time-related fields of struct stat are as follows:

            st_atime   Time when file data last accessed. Changed by the mknod(2),
                       utimes(2) and read(2) system calls.

            st_mtime   Time when file data last modified. Changed by the mknod(2),
                       utimes(2) and write(2) system calls.

            st_ctime   Time when file status was last changed (inode data modification).
                       Changed by the chmod(2), chown(2), link(2), mknod(2), rename(2),
                       unlink(2), utimes(2) and write(2) system calls.

          We can implement something like st_ctime later. It might be useful to have for accounting when we have users and permissions.

          dhruba borthakur added a comment -

          OK, I agree. Let's submit this patch with Modification Time only (no Creation Time). And it saves us 8 bytes per file on the NameNode!

          +1.

          Konstantin Shvachko added a comment -

          I agree there is no reason to measure the time during which a file was created, and therefore we need just one timestamp per file.
          But I would rather go with creation time than modification time. Modification time gives the impression that files can be
          modified, which is not true. So we should introduce mod-time later, when appends are implemented.

          Yes, in your implementation the different FileStatus-es share only getLength() and getReplication().
          We can introduce the base class later if it makes sense.

          Hadoop QA added a comment -

          +0, new Findbugs warnings. http://issues.apache.org/jira/secure/attachment/12360382/1377-noctime.patch applied and successfully tested against trunk revision r549933, but there appear to be new Findbugs warnings introduced by this patch.

          New Findbugs warnings: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/324/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html

          Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/324/testReport/

          Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/324/console
          Doug Cutting added a comment -

          > I would rather go with creation time rather than with modification.

          Except that java.io.File only implements modification, and our creation time doesn't match the semantics of any POSIX concept. So I think modification is the one to support.

          Doug Cutting added a comment -

          I just committed this. I fixed the single FindBugs warning. Thanks Dhruba!

          Hudson added a comment -

          Integrated in Hadoop-Nightly #133 (See http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/133/ )

            People

            • Assignee: dhruba borthakur
            • Reporter: dhruba borthakur
            • Votes: 0
            • Watchers: 0

              Dates

              • Created:
                Updated:
                Resolved:
