Hadoop Common
HADOOP-894

dfs client protocol should allow asking for parts of the block map

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.12.0
    • Fix Version/s: 0.14.0
    • Component/s: None
    • Labels: None

      Description

      I think that the HDFS client protocol should change like this:

      /** The meta-data about a file that was opened. */
      class OpenFileInfo {
        /** the info for the first block */
        public LocatedBlockInfo getBlockInfo();
        public long getBlockSize();
        public long getLength();
      }

      interface ClientProtocol extends VersionedProtocol {
        public OpenFileInfo open(String name) throws IOException;
        /** get block info for any range of blocks */
        public LocatedBlockInfo[] getBlockInfo(String name, int blockOffset, int blockLength) throws IOException;
      }

      so that the client can decide how much block info to request and when. Currently, when the file is opened or an error occurs, the entire block list is requested and sent.
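
      A minimal client-side sketch of how the proposed calls could be used, assuming the interfaces
      above; the wrapper class, the batch size, and every name other than ClientProtocol, getBlockInfo()
      and LocatedBlockInfo are illustrative, not part of the proposal:

        import java.io.IOException;

        /** Illustrative reader that asks the name-node for block info on demand
         *  instead of receiving the whole block map at open() time. */
        class IncrementalBlockReader {
          private final ClientProtocol namenode;  // proposed interface above
          private final String path;
          private final int window;               // blocks to request per RPC (illustrative)

          IncrementalBlockReader(ClientProtocol namenode, String path, int window) {
            this.namenode = namenode;
            this.path = path;
            this.window = window;
          }

          /** Fetch info only for the blocks the caller is about to read. */
          LocatedBlockInfo[] blocksFrom(int firstBlock) throws IOException {
            // Request a small range starting at firstBlock, not the entire block list.
            return namenode.getBlockInfo(path, firstBlock, window);
          }
        }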

      1. partialBlockList2.patch
        50 kB
        Konstantin Shvachko
      2. partialBlockList3.patch
        72 kB
        Konstantin Shvachko
      3. partialBlockList6.patch
        72 kB
        Konstantin Shvachko

        Activity

        Konstantin Shvachko added a comment -

        I understand the problem as follows: a lot of clients open the same file and read its first block,
        e.g. in streaming, and then each reads a specific part of the file. So each client does not need to receive
        a block map for the whole file, but rather needs block locations in a specified range.

        I propose to modify ClientProtocol.open() to
        OpenFileInfo open( String src, int numBlocks )
        where
        src - the path;
        numBlocks - the number of blocks whose locations the client wants open() to compute
        @returns OpenFileInfo, which extends DFSFileInfo and adds

          LocatedBlock[ numBlocks ];

        DFSFileInfo contains file information including file length and replication.

        ClientProtocol should also contain
        public LocatedBlock[] getBlockLocations(String src, int offset, int length) throws IOException;
        offset - is the starting offset in the file
        length - is the number of bytes the client is supposed to read

        class LocatedBlock should include an additional field
        + long startFrom; which gives the offset within the block to the start of the desired byte range.

        Then we will need to reimplement seeks and reads for DFSInputStream using that API.
        What would be a good default for the number of blocks that getBlockLocations()
        would fetch per call if the file is read from start to finish?
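
        A rough Java sketch of the shapes proposed above; only the added startFrom field and the
        getBlockLocations() signature come from this comment, the other LocatedBlock members shown here
        are assumptions for illustration:

          /** One block of a file together with the datanodes that hold it. */
          class LocatedBlock {
            private Block block;               // assumed existing member
            private DatanodeInfo[] locations;  // assumed existing member
            // proposed addition: offset within this block of the byte range the client asked for
            private long startFrom;

            public long getStartFrom() { return startFrom; }
          }

          interface ClientProtocol extends VersionedProtocol {
            /** Block locations covering [offset, offset + length) of the file. */
            public LocatedBlock[] getBlockLocations(String src, int offset, int length) throws IOException;
          }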

        Konstantin Shvachko added a comment -

        In this patch:

        • I included the list of LocatedBlock directly into DFSFileInfo, rather than overloading the class.
        • removed redundant members in DFSFileInfo
        • ClientProtocol.open(src, length) takes 2 parameters now: the file name and the length of the starting segment
          of the file for which block locations must be returned
        • Old open(src) is deprecated. I've seen many servlets use it directly. I replaced those calls with
          getBlockLocations() in hadoop servlets, but there might be others.
        • new ClientProtocol.getBlockLocations() method is introduced
        • DFSInputStream during initialization fetches only 10 blocks; subsequent blocks are requested and
          cached during the regular read() (see the sketch after this list).
        • pread first tries to use already cached blocks, then requests block locations from the name-node.
        • DFSClient.getHints() now calls getBlockLocations(); I removed the redundant getHints() from ClientProtocol and NameNode
        • many existing tests verify the new functionality; I added one more case to TestPread, which ensures pread correctly
          reads both cached and uncached blocks.
        • checked style and checked JavaDoc.
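
        A simplified sketch of the read path described above: prefetch a fixed number of blocks at open
        time, cache them, and go back to the name-node only when a read or pread falls outside the cache.
        The class, fields and helper names are illustrative, not the actual DFSInputStream code:

          import java.io.IOException;

          /** Illustrative block-location cache for a DFSInputStream-like reader. */
          class BlockLocationCache {
            private final ClientProtocol namenode;
            private final String src;
            private final long prefetchSize;   // e.g. 10 * default block size
            private final java.util.List<LocatedBlock> cached =
                new java.util.ArrayList<LocatedBlock>();

            BlockLocationCache(ClientProtocol namenode, String src, long prefetchSize)
                throws IOException {
              this.namenode = namenode;
              this.src = src;
              this.prefetchSize = prefetchSize;
              fetchAndCache(0);                // only the first few blocks at open time
            }

            /** Used by read() and pread(): serve from cache, otherwise ask the name-node. */
            LocatedBlock blockContaining(long offset, long blockSize) throws IOException {
              int index = (int) (offset / blockSize);
              while (index >= cached.size()) {
                // Extend the cache from where it currently ends, one prefetch window at a time.
                if (fetchAndCache(cached.size() * blockSize) == 0) {
                  throw new IOException("offset " + offset + " is past the end of " + src);
                }
              }
              return cached.get(index);
            }

            private int fetchAndCache(long startOffset) throws IOException {
              LocatedBlock[] more =
                  namenode.getBlockLocations(src, (int) startOffset, (int) prefetchSize);
              for (LocatedBlock b : more) {
                cached.add(b);
              }
              return more.length;
            }
          }
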
        Hadoop QA added a comment -

        -1, new javadoc warnings

        The javadoc tool appears to have generated warning messages when testing the latest attachment http://issues.apache.org/jira/secure/attachment/12356609/partialBlockList.patch against trunk revision r534624.

        Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/111/testReport/
        Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/111/console

        Please note that this message is automatically generated and may represent a problem with the automation system and not the patch.

        dhruba borthakur added a comment -

        One issue we discussed earlier: The ClientProtocol open method used to take a path name. It was of the form:

        public LocatedBlock[] open(String src)

        This patch changes it to

        public DFSFileInfo open(String src, long length)

        The modified "open" API is not very intuitive because it is taking a "length" parameter. If we want to keep the ClientProtocol elegant and simple, we might want to remove the "length" parameter from call. The server is free to send back as many block locations as it deems fit. Typically, the server will be send one or two block locations.

        Konstantin Shvachko added a comment -

        Removed JavaDoc warning. Applied to the current trunk.

        Konstantin Shvachko added a comment -

        Yes, I can change the prototype to public DFSFileInfo open(String src) as Dhruba proposes.
        But then open() will always return 10 blocks, and if we decide to implement something that requires
        only one block or all blocks on open, we will not be able to optimize that.
        So there is a trade-off here: functionality/flexibility vs. simplicity.
        I vote for flexibility in this case.

        Hadoop QA added a comment -

        +1

        http://issues.apache.org/jira/secure/attachment/12356728/partialBlockList2.patch applied and successfully tested against trunk revision r534624.

        Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/112/testReport/
        Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/112/console

        Doug Cutting added a comment -

        I think it's strange to put LocatedBlocks in DFSFileInfo. You're trying to optimize the protocol, so that a separate call isn't required to get the length, right? So let's make that explicit by returning the file length along with the list of blocks, rather than hacking DFSFileInfo.

        public LocatedBlocks {
          private LocatedBlock[] blocks;
          private long fileLength;
        }

        public LocatedBlocks getBlockLocations(String file, long start, long length);

        Then we don't need the open() method at all. getBlockLocations() replaces it altogether. This also has the benefit that someone can open a file in the middle with a single RPC.
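
        For example, a client that wants to start reading at an arbitrary position could then do so with a
        single RPC (a sketch assuming getBlockLocations() lives on ClientProtocol; the helper and parameter
        names are illustrative):

          /** Open "in the middle" of a file with one round trip, per the proposal above. */
          static LocatedBlocks openAt(ClientProtocol namenode, String file, long position, long prefetchBytes)
              throws IOException {
            // One call returns both the file length and the block locations for the requested range,
            // so no separate open() or length lookup is needed.
            return namenode.getBlockLocations(file, position, prefetchBytes);
          }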

        Konstantin Shvachko added a comment -

        I was just about to comment that open(...) is a convenience call, which combines 2 calls:
        DFSFileInfo getListing(src) and getBlockLocations(src, 0, length).
        DFSFileInfo.fileLength is the only field that is widely used in the current implementation.
        So if folks can live without the other fields, like blockSize and blockReplication, I am removing open().

        Sameer Paranjpye added a comment -

        I don't think we should remove open() just yet.

        Long term it would be nice to have the POSIX semantics of a file's blocks not being removed while it is held open by a client, even though the namespace entry for the file is removed. In this situation, a client calling open() on a file sets the expectation that it will need the file's data until it either calls close() or loses its lease. We'd need the open() call to track open files. I don't think getBlockLocations() alone is sufficient; it is OK to call getBlockLocations() in order to get placement information for scheduling without opening the file.

        Konstantin Shvachko added a comment -

        I looked at HADOOP-1298. Sounds like open() will need to return more metadata than it does now.
        I am planning to have DFSFileInfo open(src) - with one parameter, and remove open(src, length) as Dhruba described.
        And I'm planning to keep LocatedBlock list inside DFSFileInfo.

        Doug Cutting added a comment -

        When we open a file we don't need anything in return except the length, since we can call getBlockLocations() afterwards. If we want some block locations returned from open(), as an optimization, then we should pass a start and length, giving the range of the file whose blocks we'd initially like, and return those with the length. HADOOP-1298 will add more fields to DFSFileInfo, things we don't need when opening. So HADOOP-1298 argues that we should not return a DFSFileInfo at open. Also, other users of DFSFileInfo don't need a LocatedBlockList, so I really don't think it belongs there.

        Sameer Paranjpye added a comment -

        Adding 'start' and 'length' parameters to the Namenode's 'open' RPC doesn't seem to add a lot of value. It won't be used unless we expose it through fs.FileSystem or dfs.DistributedFileSystem, and adding an 'open and seek' kind of call just seems like API bloat.

        On the other hand, having the locations of the first few blocks of a file is useful in many cases, in particular when a client is working with small files or wants to read the file's header before seeking (as MR tasks processing sequence files do). Why not just have open default to returning the first few block locations?

        Konstantin Shvachko added a comment -

        Summarizing:

        • LocatedBlocks open(String src);
        • LocatedBlocks getBlockLocations(String file, long start, long length);
        • open() always returns the first 10 blocks, as decided by the name-node.

        Does that work for everybody?
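
        Put together as a Java sketch (the LocatedBlocks layout follows Doug's earlier comment; the javadoc
        wording and the throws clauses are mine):

          public class LocatedBlocks {
            private LocatedBlock[] blocks;
            private long fileLength;
          }

          interface ClientProtocol extends VersionedProtocol {
            /** Open src and return the file length plus the first 10 block locations. */
            public LocatedBlocks open(String src) throws IOException;
            /** Block locations covering [start, start + length) bytes of the file. */
            public LocatedBlocks getBlockLocations(String file, long start, long length) throws IOException;
          }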

        Doug Cutting added a comment -

        Sameer: you're right, our current public API would not take advantage of an open with start and length, so it may be overkill. And in many cases we also read a file header from the first block before we seek anyway. Long-term, this might be a good optimization, to be able to open a file directly at a position, without touching the first block, and to be able to disable the reading of headers. It would be convenient if this did not require changes to both the protocol and to the server, but instead only on the client. To me, open(start,length) is a more general API that's no harder to implement than open(length), one that's future compatible. The client would, for now, always pass zero for 'start'. But I wouldn't veto open(length). That's also a fine API and is more minimal, a good thing.

        Doug Cutting added a comment -

        Konstantin: Does LocatedBlocks contain the length of the file? We need that too, don't we? Also, why have an open() method at all, rather than just using open(start,length), letting the client pass start=0 and length=${hdfs.initial.bytes}?

        Konstantin Shvachko added a comment -

        Yes. LocatedBlocks contains file length and a List of block locations.
        I initially implemented open(src, length) because it is more general, and deprecated old open(src).
        Dhruba finds it "not very intuitive" and Sameer says it does not "add a lot of value".

        I cannot implement open(start,length) with start > 0 right now, because in order to do that I will
        need to write an offset-to-block map for cached blocks in the client. I was planning to do it in the next
        iteration, but it was supposed to be used mostly in pread(), that is, for getBlockLocations(), not in open().

        I don't see how we can benefit from introducing the start parameter, but I definitely support adding length.
        So currently it's a tie 2:2. We need more votes to resolve the issue.

        Sameer Paranjpye added a comment -

        > It would be convenient if this did not require changes to both the protocol and to the server, but instead only on the client. To me, open(start,length) is a more general
        > API that's no harder to implement than open(length), one that's future compatible. The client would, for now, always pass zero for 'start'.

        Fair enough, open(start, length) is more general and future compatible and we should implement it. The public APIs don't change for now and the client always passes 0 for start and ${hdfs.initial.bytes} for length. Maybe we use a default of 256MB and get the first 2-8 blocks depending on which of the common block sizes (32, 64 or 128MB) applies to the file.

        Konstantin Shvachko added a comment -

        hdfs.initial.bytes - is it a configuration parameter?

        Doug Cutting added a comment -

        > hdfs.initial.bytes - is it a configuration parameter?

        Yes, and it probably needs a better name.

        Konstantin Shvachko added a comment -

        Do we really want it configurable? I was trying to avoid that. In my view the parameter is not significant enough
        to include in the configuration. I currently use a constant instead.

        Sameer Paranjpye added a comment -

        Not necessarily; we might want to start out with a reasonable default and introduce a configuration variable when it appears to be needed.

        Doug Cutting added a comment -

        > start out with a reasonable default and introduce a configuration variable when it appears to be needed

        Two other options:

        1. Make it configurable but don't document it in hadoop-default.xml.

        2. Make it configurable but document it as an "expert" parameter. (We should really go through hadoop-default.xml and mark things that most folks should leave alone as expert.)

        Konstantin Shvachko added a comment -

        More precisely, the length that I pass to open() is 10 * {dfs.block.size}, that is, 10 default block sizes.
        So it is in a sense configurable, but not as a separate parameter.

        Doug Cutting added a comment -

        > the length that I pass to open() is 10 * {dfs.block.size}

        It's too bad we don't support expressions in config files. In the meantime, we could add it as a config variable with no value in hadoop-default.xml, or a commented-out value. Perhaps we should change Configuration so that if the value for a numeric field is "" then the default is used...

        Also, 2 would be a better default for mapreduce inputs.

        Konstantin Shvachko added a comment -

        In this patch:

        • open takes three parameters: open(src, offset, length)
        • there is an undocumented config parameter "dfs.read.prefetch.size" that defines the range within which
          all block locations are fetched from the name-node during the open call (see the sketch after this list).
        • I kept 10*defaultBlockSize as the default, because 2 vs. 10 does not make much difference to communication or
          name-node performance, but 10 blocks will be ALL blocks for the majority of files.
        • Implemented block location caching for reads and preads.
        • Included more test cases in TestPread
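
        A small sketch of how the client side could derive the prefetch length from these settings using
        org.apache.hadoop.conf.Configuration; the 64 MB fallback block size and the helper class are
        assumptions, only the "dfs.read.prefetch.size" name and the 10 * defaultBlockSize default come from
        the comment above:

          import org.apache.hadoop.conf.Configuration;

          class PrefetchSize {
            /** Prefetch range for open(): dfs.read.prefetch.size if set, else 10 default block sizes. */
            static long get(Configuration conf) {
              long defaultBlockSize = conf.getLong("dfs.block.size", 64 * 1024 * 1024);  // 64 MB fallback (assumed)
              return conf.getLong("dfs.read.prefetch.size", 10 * defaultBlockSize);
            }
          }
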
        Hadoop QA added a comment -

        -1, could not apply patch.

        The patch command could not apply the latest attachment http://issues.apache.org/jira/secure/attachment/12357430/partialBlockList3.patch as a patch to trunk revision r538318.

        Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/145/console

        Please note that this message is automatically generated and may represent a problem with the automation system and not the patch.

        Konstantin Shvachko added a comment -

        Synchronized with the trunk.

        Hadoop QA added a comment -

        +1

        http://issues.apache.org/jira/secure/attachment/12357448/partialBlockList4.patch applied and successfully tested against trunk revision r538318.

        Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/148/testReport/
        Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/148/console

        Nigel Daley added a comment -

        Moving to 0.14 as this is not a bug.

        Doug Cutting added a comment -

        This patch no longer cleanly applies to trunk. Sorry! Can you please update it? Thanks!

        Konstantin Shvachko added a comment -

        Updated the patch.

        Hadoop QA added a comment -

        +0

        http://issues.apache.org/jira/secure/attachment/12357807/partialBlockList5.patch applied and successfully tested against trunk revision r540271, but there appear to be new Findbugs warnings introduced by this patch.

        Findbugs output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/171/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
        Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/171/testReport/
        Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/171/console

        Konstantin Shvachko added a comment -

        Fixed FindBugs warnings.
        This was actually useful.

        Hadoop QA added a comment -

        +1

        http://issues.apache.org/jira/secure/attachment/12357824/partialBlockList6.patch applied and successfully tested against trunk revision r540359.

        Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/173/testReport/
        Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/173/console

        Doug Cutting added a comment -

        I just committed this. Thanks, Konstantin!

        Hadoop QA added a comment -

        Integrated in Hadoop-Nightly #98 (See http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/98/ )

          People

          • Assignee: Konstantin Shvachko
          • Reporter: Owen O'Malley
          • Votes: 1
          • Watchers: 0

            Dates

            • Created:
              Updated:
              Resolved:
