HBASE-5757

TableInputFormat should handle as many errors as possible

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.90.6
    • Fix Version/s: 0.94.1, 0.95.0
    • Component/s: mapreduce
    • Labels:
      None
    • Hadoop Flags:
      Reviewed

      Description

      Prior to HBASE-4196 there was different handling of IOExceptions thrown from the scanner in the mapred and mapreduce APIs. The patch to HBASE-4196 unified this handling so that if an exception is caught, a reconnect is attempted (without bothering the mapred client). After that, HBASE-4269 changed this behavior back, but in both the mapred and mapreduce APIs. The question is: is there any reason not to handle all errors that the input format can handle? In other words, why not try to reissue the request after any IOException? I see the following disadvantages of the current approach:

      • the client may see exceptions like LeaseException and ScannerTimeoutException if it fails to process all fetched data within the timeout
      • to avoid ScannerTimeoutException the client must raise hbase.regionserver.lease.period
      • a timeout for tasks is already configured in mapred.task.timeout, so this seems a bit redundant, because typically one needs to update both of these parameters
      • I don't see any possibility of getting rid of LeaseException (this is configured on the server side)

      I think all of these issues would be gone if the DoNotRetryIOException were not rethrown. On the other hand, handling errors in the InputFormat has the disadvantage that it may hide some inefficiency from the user. E.g. if I have a very big scanner.caching and I manage to process only a few rows within the timeout, I will end up with a single row being fetched many times (and will not be explicitly notified about this). Could we solve this problem by adding some counter to the InputFormat?
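The behavior being proposed, catching the scanner's IOException, restarting the scanner, and retrying the read once, can be sketched independently of HBase. This is a minimal illustration; the Scanner interface and all names below are hypothetical stand-ins, not the actual HBase API:

```java
import java.io.IOException;
import java.util.function.Supplier;

// HBase-independent sketch of the proposed retry behavior: catch the
// IOException from the scanner, restart the scanner, and retry exactly
// once. A second failure propagates to the framework as before.
public class RetryingReader {

    /** Stand-in for a scanner whose lease may expire mid-scan. */
    public interface Scanner {
        String next() throws IOException;
    }

    private final Supplier<Scanner> restartFactory;
    private Scanner scanner;
    private long restarts = 0; // could back the counter suggested above

    public RetryingReader(Supplier<Scanner> restartFactory) {
        this.restartFactory = restartFactory;
        this.scanner = restartFactory.get();
    }

    /** Returns the next row, restarting the scanner once on failure. */
    public String nextValue() throws IOException {
        try {
            return scanner.next();
        } catch (IOException e) {
            // Restart at the current position and retry once.
            scanner = restartFactory.get();
            restarts++;
            return scanner.next();
        }
    }

    public long getRestarts() {
        return restarts;
    }
}
```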

      1. hbase-5757-92.patch
        7 kB
        Jonathan Hsieh
      2. 5757-trunk-v2.txt
        9 kB
        Ted Yu
      3. HBASE-5757-trunk-r1341041.patch
        9 kB
        Jan Lukavsky
      4. HBASE-5757.patch
        9 kB
        Jan Lukavsky
      5. HBASE-5757.patch
        2 kB
        Jan Lukavsky

        Issue Links

          Activity

          Jan Lukavsky created issue -
          Jan Lukavsky added a comment -

          The problem with multiple fetching of rows doesn't exist. I thought (don't know why) that ScannerTimeoutException can be thrown while processing rows cached in the scanner on client side. This is not the case. Adding counter for the number of retries in the input format might be interesting nevertheless.

          Jan Lukavsky added a comment -

          Attaching a very simple patch with no test modifications. This works for us (the mapred API is untested), though no counter for the restarts is added.

          Jan Lukavsky made changes -
          Attachment HBASE-5757.patch [ 12522222 ]
          Jan Lukavsky made changes -
          Summary TableInputFormat should handle as much errors as possible TableInputFormat should handle as many errors as possible
          Jonathan Hsieh added a comment -

          Jan,

          I looked at the logic again and I think you are right. When I did a quick glance last time, I only saw the isolated patch and didn't see enough context to see the existing retry logic. (Review Board is helpful.)

          Mind adding some comments explaining why this is OK to retry? (We retry once, and if we fail twice we give up.) It seems strange to me that we are retrying something that throws a DoNotRetryIOException.

          Anyone else have any comments?

          Jan Lukavsky added a comment -

          Hi Jon,

          I'm not sure, but IMO the purpose of DoNotRetryIOException is to instruct the HTable client not to retry the request. In TableInputFormat we are working at a higher level, so retrying is OK. DNRIOEx is there to distinguish exceptions that might be caused by region reassignment, for instance, and that might disappear if the request is resent (possibly after dropping the cached region location and querying .META. again). UnknownScannerException, on the other hand, will not 'disappear' if the same request is sent by the HTable client. But in the InputFormat we can restart the scanner, so we will not send the same request, and hence it can succeed.

          Retrying the request just once and then giving up avoids infinite cycles, and mostly retrying once suffices, because a typical cause of UnknownScannerException or LeaseException is a too-slow Mapper (there could be others, like scanning for a too-sparse column, but that will not be solved by this issue). It is possible to lower scanner caching, but this might be inefficient (e.g. when 99.99% of the time the caching is just OK, and then there exist some strange records that take the Mapper longer to process). Lowering the caching globally just because of these few records doesn't sound like the 'correct' solution.

          Jonathan Hsieh made changes -
          Link This issue is related to HBASE-2161 [ HBASE-2161 ]
          Jonathan Hsieh added a comment -

          Got it, great clarification on the DNRIOExn. Can you add this in the comments of the catch block in TableInputFormat? If it passes tests then I'll commit. If you could add a Hadoop counter that would be awesome (or file a jira to add one).

          I have a feeling there might be a configuration workaround. Are you using scanner caching at all on your client? (The default is no caching.) Seems like there would be a sweet spot above which there are diminishing returns. It sounds like in your case your rows may be variably sized, making this difficult.

          Note that we've been able to set scanner caching on each individual scan since 0.20 (HBASE-1759) – setting it for that job may be more 'correct'.

          Also it looks like some of this code could use a cleanup – HBASE-2161 is another jira that says ScannerTimeoutException may be cruft – why is it separate from LeaseException? (possibly related to ). I think I would prefer if we explicitly called out the exceptions (UnknownScannerException, LeaseException and ScannerTimeoutException) that we retry on and rethrew the rest (there was a recent thread discussing IOException abuse).
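The explicit call-out described above, retrying only a named whitelist of scanner-lifecycle exceptions and rethrowing everything else, could be sketched roughly as follows. This is an illustration, not the committed patch; the exception classes here are local stand-ins for HBase's UnknownScannerException / LeaseException / ScannerTimeoutException:

```java
import java.io.IOException;

// Sketch of whitelist-based retry: only the listed exception types
// trigger a single retry; any other IOException propagates unchanged.
public class SelectiveRetry {

    public static class UnknownScannerException extends IOException {}
    public static class LeaseException extends IOException {}

    public interface Call<T> {
        T run() throws IOException;
    }

    /**
     * Runs the call, retrying exactly once for the whitelisted
     * exception types; all other IOExceptions reach the caller.
     */
    public static <T> T callWithRetry(Call<T> call) throws IOException {
        try {
            return call.run();
        } catch (UnknownScannerException | LeaseException e) {
            return call.run(); // restart path: one retry, then give up
        }
    }
}
```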

          Jonathan Hsieh made changes -
          Link This issue is related to HBASE-5973 [ HBASE-5973 ]
          Jan Lukavsky added a comment -

          Attaching a patch including modified tests (they pass on my box) and a counter in the new API.

          Jan Lukavsky made changes -
          Attachment HBASE-5757.patch [ 12527262 ]
          Jan Lukavsky added a comment -

          Note that we've been able to set scanner caching on each individual scan since 0.20 (HBASE-1759) – setting it for that job may be more 'correct'.

          We are setting different caching for different jobs; the problem is that the rows may take different times to process (depending on the job) and this cannot be told in advance. Currently it is only possible to set the caching for the whole job, but even if it were possible to change the caching during the job, we would not know we needed to do so before we got the ScannerTimeoutException. So handling this error in the TableInputFormat seems like the right solution to me.

          Ted Yu added a comment -

          @Jan:
          Neither patch applies to trunk as of today.
          Can you attach a patch for trunk and name it accordingly?

          Thanks

          Jan Lukavsky added a comment -

          There was a conflicting commit from the patch for HBASE-6004. Merged this patch; the new one should apply to revision 1341041.

          Jan Lukavsky made changes -
          Attachment HBASE-5757-trunk-r1341041.patch [ 12528434 ]
          Ted Yu made changes -
          Status Open [ 1 ] Patch Available [ 10002 ]
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12528434/HBASE-5757-trunk-r1341041.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified tests.

          +1 hadoop23. The patch compiles against the hadoop 0.23.x profile.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          -1 findbugs. The patch appears to introduce 33 new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests:
          org.apache.hadoop.hbase.coprocessor.TestClassLoading
          org.apache.hadoop.hbase.replication.TestReplication
          org.apache.hadoop.hbase.regionserver.TestSplitTransactionOnCluster
          org.apache.hadoop.hbase.replication.TestMultiSlaveReplication
          org.apache.hadoop.hbase.replication.TestMasterReplication

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/1944//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/1944//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/1944//console

          This message is automatically generated.

          Ted Yu added a comment -

          I ran the following two tests and they passed with the latest patch:

            518  mt -Dtest=TestClassLoading
            519  mt -Dtest=TestSplitTransactionOnCluster
          

          The replication tests have been failing and are not related to this change.

          Minor comments:

          +        // try to handle exceptions all possible exceptions by restarting
          

          The first 'exceptions ' should be removed.

          Ted Yu added a comment -

          Patch v2 changes the comments w.r.t. exceptions being handled.

          @Jon:
          Do you have further comments ?

          Ted Yu made changes -
          Attachment 5757-trunk-v2.txt [ 12528448 ]
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12528448/5757-trunk-v2.txt
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified tests.

          +1 hadoop23. The patch compiles against the hadoop 0.23.x profile.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          -1 findbugs. The patch appears to introduce 33 new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests:
          org.apache.hadoop.hbase.coprocessor.TestClassLoading
          org.apache.hadoop.hbase.replication.TestReplication
          org.apache.hadoop.hbase.replication.TestMultiSlaveReplication
          org.apache.hadoop.hbase.regionserver.wal.TestHLog
          org.apache.hadoop.hbase.replication.TestMasterReplication

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/1945//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/1945//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/1945//console

          This message is automatically generated.

          Jonathan Hsieh added a comment -

          Zhihong, thanks for pinging me about this. Jan, thanks for being patient with me on this.

          The changes look good. The patch applies to 0.94 and trunk. I believe the request was for getting this into 0.90 – I'll look into backporting this behavior to that version.

          Jonathan Hsieh made changes -
          Assignee Jan Lukavsky [ je.ik ]
          Ted Yu added a comment -

          TestHLog failure was caused by:

          java.net.BindException: Problem binding to localhost/127.0.0.1:41331 : Address already in use
          	at org.apache.hadoop.ipc.Server.bind(Server.java:227)
          	at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:301)
          

          I ran it locally and it passed.

          Jonathan Hsieh added a comment -

          Committed to 0.94 and 0.96.

          Jonathan Hsieh made changes -
          Fix Version/s 0.96.0 [ 12320040 ]
          Fix Version/s 0.94.1 [ 12320257 ]
          Hudson added a comment -

          Integrated in HBase-TRUNK #2911 (See https://builds.apache.org/job/HBase-TRUNK/2911/)
          HBASE-5757 TableInputFormat should handle as many errors as possible (Jan Lukavsky) (Revision 1341132)

          Result = FAILURE
          jmhsieh :
          Files :

          • /hbase/trunk/src/main/java/org/apache/hadoop/hbase/mapred/TableRecordReaderImpl.java
          • /hbase/trunk/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java
          • /hbase/trunk/src/test/java/org/apache/hadoop/hbase/mapred/TestTableInputFormat.java
          Hudson added a comment -

          Integrated in HBase-0.94 #205 (See https://builds.apache.org/job/HBase-0.94/205/)
          HBASE-5757 TableInputFormat should handle as many errors as possible (Jan Lukavsky) (Revision 1341133)

          Result = FAILURE
          jmhsieh :
          Files :

          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/mapred/TableRecordReaderImpl.java
          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/mapred/TestTableInputFormat.java
          Jonathan Hsieh added a comment -

          hbase-5757-92.patch is for the 0.92 and 0.90 versions. The underlying metrics have changed, so it does not update metrics like in 0.94 or trunk/0.96. It does, however, include the updated tests that demonstrate the updated semantics.

          Jonathan Hsieh made changes -
          Attachment hbase-5757-92.patch [ 12528472 ]
          Jonathan Hsieh added a comment -

          Zhihong, Jan, if the 0.92/0.90 versions look good to you I will commit.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12528472/hbase-5757-92.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified tests.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/1946//console

          This message is automatically generated.

          Ted Yu added a comment -

          TestTableInputFormat passed in 0.92 with the 0.92 patch.

          +1 from me.

          Jonathan Hsieh added a comment -

          Committed the 0.92 version to the 0.92/0.90 branches. Thanks for the review Ted, thanks for the patches Jan!

          Jonathan Hsieh made changes -
          Status Patch Available [ 10002 ] Resolved [ 5 ]
          Hadoop Flags Reviewed [ 10343 ]
          Fix Version/s 0.90.7 [ 12319481 ]
          Fix Version/s 0.92.2 [ 12319888 ]
          Resolution Fixed [ 1 ]
          Hudson added a comment -

          Integrated in HBase-0.92 #415 (See https://builds.apache.org/job/HBase-0.92/415/)
          HBASE-5757 TableInputFormat should handle as many errors as possible (Jan Lukavsky) (Revision 1341205)

          Result = FAILURE
          jmhsieh :
          Files :

          • /hbase/branches/0.92/CHANGES.txt
          • /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/mapred/TableRecordReaderImpl.java
          • /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java
          • /hbase/branches/0.92/src/test/java/org/apache/hadoop/hbase/mapred/TestTableInputFormat.java
          Show
          Hudson added a comment - Integrated in HBase-0.92 #415 (See https://builds.apache.org/job/HBase-0.92/415/ ) HBASE-5757 TableInputFormat should handle as many errors as possible (Jan Lukavsky) (Revision 1341205) Result = FAILURE jmhsieh : Files : /hbase/branches/0.92/CHANGES.txt /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/mapred/TableRecordReaderImpl.java /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java /hbase/branches/0.92/src/test/java/org/apache/hadoop/hbase/mapred/TestTableInputFormat.java
          Hudson added a comment -

          Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #13 (See https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/13/)
          HBASE-5757 TableInputFormat should handle as many errors as possible (Jan Lukavsky) (Revision 1341132)

          Result = FAILURE
          jmhsieh :
          Files :

          • /hbase/trunk/src/main/java/org/apache/hadoop/hbase/mapred/TableRecordReaderImpl.java
          • /hbase/trunk/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java
          • /hbase/trunk/src/test/java/org/apache/hadoop/hbase/mapred/TestTableInputFormat.java
          Hudson added a comment -

          Integrated in HBase-0.94-security #28 (See https://builds.apache.org/job/HBase-0.94-security/28/)
          HBASE-5757 TableInputFormat should handle as many errors as possible (Jan Lukavsky) (Revision 1341133)

          Result = FAILURE
          jmhsieh :
          Files :

          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/mapred/TableRecordReaderImpl.java
          • /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java
          • /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/mapred/TestTableInputFormat.java
          Hudson added a comment -

          Integrated in HBase-0.92-security #108 (See https://builds.apache.org/job/HBase-0.92-security/108/)
          HBASE-5757 TableInputFormat should handle as many errors as possible (Jan Lukavsky) (Revision 1341205)

          Result = FAILURE
          jmhsieh :
          Files :

          • /hbase/branches/0.92/CHANGES.txt
          • /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/mapred/TableRecordReaderImpl.java
          • /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java
          • /hbase/branches/0.92/src/test/java/org/apache/hadoop/hbase/mapred/TestTableInputFormat.java
          stack made changes -
          Component/s mapred [ 12312137 ]
          Lars Hofhansl made changes -
          Status Resolved [ 5 ] Closed [ 6 ]
          stack made changes -
          Fix Version/s 0.95.0 [ 12324094 ]
          Fix Version/s 0.90.7 [ 12319481 ]
          Fix Version/s 0.92.2 [ 12319888 ]
          Fix Version/s 0.96.0 [ 12320040 ]
          Fix Version/s 0.94.1 [ 12320257 ]
          Lars Hofhansl made changes -
          Fix Version/s 0.94.1 [ 12320257 ]

            People

            • Assignee: Jan Lukavsky
            • Reporter: Jan Lukavsky
            • Votes: 0
            • Watchers: 7