Details

    • Type: Sub-task
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 3.1.0, 4.1.0
    • Labels: None

      Description

      This is a short-term workaround and a safe approach. Currently we disable an index when an index update fails (which can still bring down the whole cluster). After an index is disabled, a human needs to be involved to rebuild the entire index, which may not be ideal.

      The patch adds support for automatically rebuilding a disabled index partially, from where it failed. In addition, it removes the RS abort during WAL recovery to prevent chain failures, since aborting is unnecessary there.

      To disable automatic rebuilding of failed indexes, add the following configuration to hbase-site.xml:

      <property>
         <name>phoenix.index.failure.handling.rebuild</name>
         <value>false</value>
      </property>
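
      For illustration, here is a minimal hedged sketch (not part of the patch) of how server-side code could read this flag; the raw key is used because the constant name isn't shown here:

          import org.apache.hadoop.conf.Configuration;

          // Sketch only: reads the rebuild flag from the server configuration.
          public final class RebuildConfigSketch {
              // Defaults to true, i.e. failed indexes are rebuilt automatically.
              public static boolean isRebuildEnabled(Configuration conf) {
                  return conf.getBoolean("phoenix.index.failure.handling.rebuild", true);
              }
          }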
      
      1. Phoenix-1112.patch
        46 kB
        Jeffrey Zhong
      2. Phoenix-1112-v2.patch
        55 kB
        Jeffrey Zhong
      3. Phoenix-1112-v3.patch
        63 kB
        Jeffrey Zhong

        Issue Links

          Activity

          jamestaylor James Taylor added a comment -

          Thanks for the patch, Jeffrey Zhong. This is a big improvement. For easier review, it'd be good to spin up a pull request on our github mirror. Jesse Yates - would you please take a look, in particular at the change to how index write failures are handled? rajeshbabu & Jeffrey Zhong - what about local indexes? Should we not do this? Can we do the data & index writes in such a way that they're all-or-nothing (a separate JIRA for this would be good)?

          Here's some feedback:

          • I think we can get away with just one additional column in SYSTEM.CATALOG. I don't think we need INDEX_NEED_PARTIALLY_REBUILD. Just use an INDEX_DISABLE_TIMESTAMP value of 0 or null to know that a rebuild is not necessary (see the sketch after this list).
          • Did you run into any issues opening a Phoenix JDBC connection from the server-side in MetaDataRegionObserver? It would add a new dependency on the antlr jar being available on the server-side. Plus, is everything available from a coprocessor that we need (i.e. can it act just like an HBase client)?
          • Is the change from calling recoveryWriter.writeAndKillYourselfOnFailure(indexUpdates) to unconditionally calling recoveryWriter.write(indexUpdates) intentional? Do we change our row-level (table row + index row) guarantees? Jesse Yates - this is the important bit for you to comment on. Should this be config parameter controlled? Maybe just a new no-op failure policy impl that could be configured by default even?
          • Can you please use static constants for config parameter names (define them in QueryServices with the others) and static constants for default values (define them in QueryServicesOptions)?
          • Would you mind filing a subtask to update the secondary index docs?
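
          As a hedged illustration of the first point (all names in this sketch are assumed, not taken from the patch), the rebuild decision could key off the single column like this:

              // Sketch only: a 0 or null INDEX_DISABLE_TIMESTAMP means no rebuild is
              // needed; a positive value marks the lower bound of the partial rebuild.
              static void maybeRebuild(PTable index) throws SQLException {
                  Long disableTs = getIndexDisableTimestamp(index); // hypothetical accessor
                  if (disableTs != null && disableTs > 0L) {
                      rebuildIndexFrom(index, disableTs); // hypothetical rebuild entry point
                  }
              }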

          Good stuff, Jeffrey Zhong!

          jeffreyz Jeffrey Zhong added a comment -

          Thanks James Taylor for the reviews!

          I think we can get away with just one additional column in SYSTEM.CATALOG. I don't think we need INDEX_NEED_PARTIALLY_REBUILD. Just use an INDEX_DISABLE_TIMESTAMP value of 0 or null to know that a rebuild is not necessary.

          You're right. I was thinking of a use case for temporarily disabling the rebuild of a disabled index. If we overload the INDEX_DISABLE_TIMESTAMP column, we would have to rebuild the whole index, because its value would be 0 at that point. What do you think? I can remove INDEX_NEED_PARTIALLY_REBUILD if such a use case isn't needed.

          Did you run into any issues opening a Phoenix JDBC connection from the server-side in MetaDataRegionObserver? It would add a new dependency on the antlr jar being available on the server-side. Plus, is everything available from a coprocessor that we need (i.e. can it act just like an HBase client)?

          That's a good point. I can add the antlr dependency to the phoenix-core jar. Each RS can act as an HBase client.

          Is the change from calling recoveryWriter.writeAndKillYourselfOnFailure(indexUpdates) to unconditionally calling recoveryWriter.write(indexUpdates) intentional?

          Yes, that's intentional. The reason we abort the server during the normal write path is that we write updates into the WAL first, then send them to the index region server, and finally commit the changes on the current data region server. Since the changes are already in the WAL and we have to roll forward, we can only abort the RS to avoid inconsistent reads between the index and data regions.

          During recovery, however, no new WAL is written and the data region isn't online (there is no inconsistency issue because only the index region is online), so there is no need to abort the data region server; the data region is already offline anyway.

          During the whole recovered-edits replay, the index region will see at most one new change, or none at all. This is fine because, in a normal situation, the index region can be one change ahead of the data region. Therefore, when we fail to update the index during recovery, we can just let the exception bubble up to fail the data region open; the region will be reassigned somewhere else and the WAL edits replay will be retried later.
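
          Here is a hedged sketch of the contrast described above (the interface and names are illustrative, not the actual Phoenix classes):

              import java.io.IOException;

              // Sketch only: contrasts the normal write path with the recovery path.
              public class RecoveryWriteSketch {
                  interface IndexWriter {
                      void write(Object indexUpdates) throws IOException;
                      void writeAndKillYourselfOnFailure(Object indexUpdates) throws IOException;
                  }

                  // Normal path: edits are already synced to the WAL, so a failed index
                  // write leaves no safe rollback; the RS aborts to avoid inconsistent reads.
                  void normalPath(IndexWriter writer, Object updates) throws IOException {
                      writer.writeAndKillYourselfOnFailure(updates);
                  }

                  // Recovery path: the data region is still offline, so just let the
                  // exception bubble up; the region open fails and the region is
                  // reassigned elsewhere to retry the WAL replay.
                  void recoveryPath(IndexWriter writer, Object updates) throws IOException {
                      writer.write(updates);
                  }
              }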

          Can you please use static constants for config parameter names (define them in QueryServices with the others) and static constants for default values (define them in QueryServicesOptions)?
          Would you mind filing a subtask to update the secondary index docs?

          Sure, let me do that. Thanks.

          jamestaylor James Taylor added a comment -

          I can remove INDEX_NEED_PARTIALLY_REBUILD if such a use case isn't needed

          Yes, let's remove it. We can always add it down the road if we need it for a new use case.

          Yes, that's intentional.

          With your patch, will the region server ever kill itself?

          jamestaylor James Taylor added a comment -

          A little bit more feedback:

          In this function in MetaDataClient, please use buildIndex() as your entry point instead of using PostIndexDDLCompiler directly. If you need to add an additional argument for the time range, that's fine. The reason is that buildIndex() is the entry point for rebuilding the index, and the local-indexing-specific updates live there.
          + public void buildPartialIndexFromTimeStamp(PTable index, TableRef dataTableRef,
          + long lowerBoundTimeStamp) throws SQLException {

          Also, good catch on upping the minor version in MetaDataProtocol, but I'm going to make that change independent of your check-in.

          jeffreyz Jeffrey Zhong added a comment -

          James Taylor I've incorporated your feedback into the v2 patch. In the v2 patch, I put the antlr-runtime dependency into the phoenix-core jar; alternatively, we could document this and let users drop antlr-runtime into the HBase classpath.

          The pull request is at https://github.com/jeffreyz88/phoenix-1/commit/897bf79243f1bbf03b6690add6a17dfd8fe41e2c

          jeffreyz Jeffrey Zhong added a comment -

          With your patch, will the region server ever kill itself?

          Yes, but it's very unlikely. During the normal write path, a region server might still kill itself if it can't update the index state to DISABLE. The part I'm changing is in the preWALRestore function, which is only triggered on the edits-recovery code path. Thanks.
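
          For context, preWALRestore is the RegionObserver hook that runs while a region replays recovered edits; a hedged sketch follows (the signature matches the HBase API of that era, but the body is illustrative, not the actual Indexer code):

              import java.io.IOException;
              import org.apache.hadoop.hbase.HRegionInfo;
              import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
              import org.apache.hadoop.hbase.coprocessor.ObserverContext;
              import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
              import org.apache.hadoop.hbase.regionserver.wal.HLogKey;
              import org.apache.hadoop.hbase.regionserver.wal.WALEdit;

              // Sketch only: this hook fires exclusively during recovered-edits replay,
              // so a failure here can simply be rethrown to fail the region open.
              public class RecoveryOnlyObserver extends BaseRegionObserver {
                  @Override
                  public void preWALRestore(ObserverContext<RegionCoprocessorEnvironment> ctx,
                          HRegionInfo info, HLogKey logKey, WALEdit logEdit) throws IOException {
                      // derive index updates from logEdit and write them; rethrow on failure
                  }
              }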

          jeffreyz Jeffrey Zhong added a comment -

          The v3 patch addresses the latest comments from James Taylor.

          Below are answers to some of your questions:

          - if (newState != PIndexState.BUILDING && newState != PIndexState.DISABLE) {
          + if (newState != PIndexState.BUILDING && newState != PIndexState.DISABLE &&
          +         newState != PIndexState.INACTIVE) {

          To make sure we can transition from "DISABLE" to "INACTIVE".

          If we go this route, we should get rid of the minimal-phoenix-client.jar, as this jar will match that one.

          I agree. The issue with removing minimal-phoenix-client.jar is that we'd need to document it, and, as you know, few people read the release docs, so we may end up answering the same question over and over. In addition, the overhead is trivial; the phoenix-core jar just has more contents. We can always create a separate JIRA to remove minimal-phoenix-client down the road.

          updates coming into a table with a disabled index (to make sure no updates are lost)
          partially updating the index table
          partially updating a local index table

          That's asking a lot, but one can never have too many tests. I've updated the tests to include the first two scenarios.
          The local index one is easy and just needs a small change to an existing test case, as we did in other tests.

          There is, however, a bug in disabling the local index state on failure, so I'll create a separate JIRA for that: we can't construct the Phoenix index table name from the HBase index table name, which looks like _LOCAL_IDX_<DATA_TABLE_FULL_NAME>, and we get the following error:

          2014-07-29 18:24:53,552 WARN  [defaultRpcServer.handler=0,queue=0,port=61926] org.apache.phoenix.index.PhoenixIndexFailurePolicy(136): Attempt to disable index _LOCAL_IDX_T failed with code = TABLE_NOT_FOUND. Will use default failure policy instead.
          

          The last thing I want to mention is that we still throw an exception even when we successfully disable the index. This is wrong because the index update is done in postBatchMutate after the WAL sync, and we have to roll forward (meaning commit). That's one of the reasons we have to abort the RS.

          jamestaylor James Taylor added a comment - - edited

          Looking good. A couple of minor tweaks:

          • Now that you store the lowerTimeStamp on the TableRef, you don't need it on StatementContext
          • Modify this bit of code in BasicQueryPlan to check for a lowerTimeStamp on context.getCurrentTable() and set the lower bound of the scan (just add/modify ScanUtil.setTimeRange(scan, scn) to take a TimeRange or another argument; a sketch follows the code below). Then you can remove both of those if statements in MetaDataClient.buildIndex() where you're doing this.
                public final ResultIterator iterator(final List<SQLCloseable> dependencies) throws SQLException {
                    if (context.getScanRanges() == ScanRanges.NOTHING) {
                        return ResultIterator.EMPTY_ITERATOR;
                    }
                    
                    Scan scan = context.getScan();
                    // Set producer on scan so HBase server does round robin processing
                    //setProducer(scan);
                    // Set the time range on the scan so we don't get back rows newer than when the statement was compiled
                    // The time stamp comes from the server at compile time when the meta data
                    // is resolved.
                    // TODO: include time range in explain plan?
                    PhoenixConnection connection = context.getConnection();
                    Long scn = connection.getSCN();
                    if(scn == null) {
                        scn = context.getCurrentTime();
                        // Add one to server time since max of time range is exclusive
                        // and we need to account of OSs with lower resolution clocks.
                        if(scn < HConstants.LATEST_TIMESTAMP) {
                            scn++;
                        }
                    }
                    ScanUtil.setTimeRange(scan, scn);
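
          A hedged sketch of the suggested overload (the signature is the reviewer's suggestion, not the final API):

              import java.io.IOException;
              import org.apache.hadoop.hbase.client.Scan;

              // Sketch only: set both bounds of the scan's time range; the lower bound
              // would come from the TableRef, the upper bound from the SCN (exclusive).
              public static void setTimeRange(Scan scan, long minStamp, long maxStamp) {
                  try {
                      scan.setTimeRange(minStamp, maxStamp);
                  } catch (IOException e) {
                      throw new RuntimeException(e); // bounds are validated upstream
                  }
              }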
            

          The last thing I want to mention is that we still throw an exception even when we successfully disable the index. This is wrong because the index update is done in postBatchMutate after the WAL sync, and we have to roll forward (meaning commit). That's one of the reasons we have to abort the RS.

          Is this something you've fixed, or is this some follow-up work?

          jeffreyz Jeffrey Zhong added a comment -

          Is this something you've fixed, or is this some follow-up work?

          It's already fixed.

          I'll incorporate the minor tweaks upon check-in. Thanks for the reviews!

          jamestaylor James Taylor added a comment -

          Thanks for the excellent work, Jeffrey Zhong. Please check in to all branches (assuming all unit tests are still passing).

          mujtabachohan Mujtaba Chohan added a comment -

          Jeffrey Zhong, with your commit https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=3d69fa21123d182577a58bbc517d40ea9dc5a2cd and the changes to pom.xml, the project fails to build with the hadoop2 profile.

          mvn clean package -DskipTests -Dhadoop.profile=2
          [INFO] Scanning for projects...
          [ERROR] The build could not read 3 projects -> [Help 1]
          [ERROR]   
          [ERROR]   The project org.apache.phoenix:phoenix-core:5.0.0-SNAPSHOT (/home/mchohan/Desktop/phoenixm/phoenix-core/pom.xml) has 1 error
          [ERROR]     'dependencies.dependency.version' for org.apache.hbase:hbase-common:jar is missing. @ line 433, column 21
          [ERROR]   
          [ERROR]   The project org.apache.phoenix:phoenix-hadoop-compat:5.0.0-SNAPSHOT (/home/mchohan/Desktop/phoenixm/phoenix-hadoop-compat/pom.xml) has 1 error
          [ERROR]     'dependencies.dependency.version' for org.apache.hbase:hbase-common:jar is missing. @ line 84, column 17
          [ERROR]   
          [ERROR]   The project org.apache.phoenix:phoenix-hadoop2-compat:5.0.0-SNAPSHOT (/home/mchohan/Desktop/phoenixm/phoenix-hadoop2-compat/pom.xml) has 1 error
          [ERROR]     'dependencies.dependency.version' for org.apache.hbase:hbase-common:jar is missing. @ line 40, column 17
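
          The usual fix for this class of error is to pin the missing version on the dependency; a hedged sketch (the version property name is assumed, check the actual pom):

              <!-- Sketch only: supply the hbase-common version for the hadoop-2 profile -->
              <dependency>
                <groupId>org.apache.hbase</groupId>
                <artifactId>hbase-common</artifactId>
                <version>${hbase-hadoop2.version}</version>
              </dependency>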
          
          jeffreyz Jeffrey Zhong added a comment -

          Thanks Mujtaba Chohan for catching this! I've checked in a small fix for it.

          hudson Hudson added a comment -

          SUCCESS: Integrated in Phoenix | 3.0 | Hadoop1 #172 (See https://builds.apache.org/job/Phoenix-3.0-hadoop1/172/)
          PHOENIX-1112: Atomically rebuild index partially when index update fails (jeffreyz: rev ddf970a38ba0abeff197c68d77e25b8ea11fdd2e)

          • phoenix-core/src/main/java/org/apache/phoenix/hbase/index/Indexer.java
          • phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
          • phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
          • phoenix-core/src/main/java/org/apache/phoenix/compile/StatementContext.java
          • phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
          • phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
          • phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
          • phoenix-core/src/build/phoenix-core.xml
          • phoenix-core/pom.xml
          • phoenix-core/src/main/java/org/apache/phoenix/util/MetaDataUtil.java
          • phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
          • phoenix-core/src/main/java/org/apache/phoenix/execute/BasicQueryPlan.java
          • phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataRegionObserver.java
          • phoenix-core/src/main/java/org/apache/phoenix/query/QueryServicesOptions.java
          • phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java
          • phoenix-core/src/main/java/org/apache/phoenix/schema/TableRef.java
          • phoenix-core/src/main/java/org/apache/phoenix/mapreduce/CsvBulkLoadTool.java
          • phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexFailurePolicy.java
          jeffreyz Jeffrey Zhong added a comment -

          The patch has been integrated into the 3.0, 4.0, and master branches. Thanks James Taylor for the reviews!

          enis Enis Soztutar added a comment -

          Bulk close of all issues that have been resolved in a released version.


            People

             • Assignee:
               jeffreyz Jeffrey Zhong
             • Reporter:
               jeffreyz Jeffrey Zhong
             • Votes:
               0
             • Watchers:
               5

