Phoenix / PHOENIX-5047

can't upgrade phoenix from 4.13 to 4.14.1


    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Invalid
    • Affects Version/s: 4.14.1
    • Fix Version/s: None
    • Labels:
    • Environment:

      custom build (CB) of 4.13 on top of CDH 5.13.0, upgrading to a CB of 4.14.1 on top of HBase CDH 5.14.2

      Description

      The upgrade scenario is as follows: install Phoenix 4.13 on top of HBase 1.2.0-cdh5.13.0, then run a simple script to make sure some data is there:

      -- system tables are created on the first connection
      create schema if not exists TS;
      create table if not exists TS.TEST (STR varchar not null, INTCOL bigint not null, STARTTIME integer, DUMMY integer default 0 CONSTRAINT PK PRIMARY KEY (STR, INTCOL));
      create local index if not exists "TEST_INDEX" on TS.TEST (STR, STARTTIME);
      upsert into TS.TEST (STR, INTCOL, STARTTIME, DUMMY) values ('TEST', 4, 1, 3);
      -- make sure the data is there
      select * from TS.TEST;
      

      Then I shut down everything (query server, region servers, master, and ZooKeeper), installed HBase 1.2.0-cdh5.14.2, replaced the Phoenix libs with 4.14.1, and started the servers. When I connect to the server and run:

      select * from TS.TEST
      
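      The shutdown/upgrade sequence above, as a rough shell sketch. This is a non-authoritative illustration only: the install paths, jar names, and the way HBase/ZooKeeper are started are hypothetical and vary between CDH installs (the actual report used CDH packaging).

      ```shell
      # Hypothetical paths -- adjust for the actual CDH layout.
      # 1. Stop everything: query server, region servers, master, ZooKeeper.
      /opt/phoenix/bin/queryserver.py stop
      /opt/hbase/bin/stop-hbase.sh          # stops master and region servers
      /opt/zookeeper/bin/zkServer.sh stop

      # 2. Install HBase 1.2.0-cdh5.14.2 (package/parcel install not shown).

      # 3. Replace the Phoenix server jar on every region server's classpath.
      rm /opt/hbase/lib/phoenix-4.13.*-server.jar
      cp phoenix-4.14.1-*-server.jar /opt/hbase/lib/

      # 4. Start the servers again.
      /opt/zookeeper/bin/zkServer.sh start
      /opt/hbase/bin/start-hbase.sh
      /opt/phoenix/bin/queryserver.py start
      ```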

      I get:

      2018-11-28 07:53:03,088 ERROR [RpcServer.FifoWFPBQ.default.handler=29,queue=2,port=60020] coprocessor.MetaDataEndpointImpl: Add column failed: 
      org.apache.hadoop.hbase.DoNotRetryIOException: SYSTEM:CATALOG: 63
              at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:120)
              at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:86)
              at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.mutateColumn(MetaDataEndpointImpl.java:2368)
              at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.addColumn(MetaDataEndpointImpl.java:3242)
              at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16402)
              at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7931)
              at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1969)
              at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1951)
              at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33652)
              at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
              at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
              at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
              at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
      Caused by: java.lang.ArrayIndexOutOfBoundsException: 63
              at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:517)
              at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:421)
              at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:406)
              at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:1073)
              at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:614)
              at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.mutateColumn(MetaDataEndpointImpl.java:2361)
              ... 10 more
      

      On subsequent calls I get the same exception with a slightly different message claiming that the client and server jars are of different versions (with the ArrayIndexOutOfBoundsException as the cause, and only the ArrayIndexOutOfBoundsException in the server logs), which is not true.

      Server-side exception:

      2018-11-28 08:45:00,611 ERROR [RpcServer.FifoWFPBQ.default.handler=29,queue=2,port=60020] coprocessor.MetaDataEndpointImpl: loading system catalog table inside getVersion failed
      java.lang.ArrayIndexOutOfBoundsException: 63
              at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:517)
              at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:421)
              at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:406)
              at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:1073)
              at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:614)
              at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.loadTable(MetaDataEndpointImpl.java:1339)
              at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getVersion(MetaDataEndpointImpl.java:3721)
              at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16422)
              at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7996)
              at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1986)
              at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1968)
              at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33652)
              at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
              at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
              at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
              at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
      

      Client-side exception:

      [2018-11-28 10:45:00] [INT08][2006] ERROR 2006 (INT08): Incompatible jars detected between client and server. Ensure that phoenix-[version]-server.jar is put on the classpath of HBase in every region server: org.apache.hadoop.hbase.DoNotRetryIOException: SYSTEM:CATALOG: 63
      [2018-11-28 10:45:00] 	at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:120)
      [2018-11-28 10:45:00] 	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getVersion(MetaDataEndpointImpl.java:3726)
      [2018-11-28 10:45:00] 	at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16422)
      [2018-11-28 10:45:00] 	at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7996)
      [2018-11-28 10:45:00] 	at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1986)
      [2018-11-28 10:45:00] 	at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1968)
      [2018-11-28 10:45:00] 	at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33652)
      [2018-11-28 10:45:00] 	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
      [2018-11-28 10:45:00] 	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
      [2018-11-28 10:45:00] 	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
      [2018-11-28 10:45:00] 	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
      [2018-11-28 10:45:00] Caused by: java.lang.ArrayIndexOutOfBoundsException: 63
      [2018-11-28 10:45:00] 	at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:517)
      [2018-11-28 10:45:00] 	at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:421)
      [2018-11-28 10:45:00] 	at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:406)
      [2018-11-28 10:45:00] 	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:1073)
      [2018-11-28 10:45:00] 	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:614)
      [2018-11-28 10:45:00] 	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.loadTable(MetaDataEndpointImpl.java:1339)
      [2018-11-28 10:45:00] 	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getVersion(MetaDataEndpointImpl.java:3721)
      [2018-11-28 10:45:00] 	... 9 more
      

      Note: phoenix.schema.isNamespaceMappingEnabled is set to true.
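      For reference, namespace mapping is enabled through hbase-site.xml on both the client and the server side; a minimal sketch of the relevant property (placement per the Phoenix namespace-mapping documentation):

      ```xml
      <!-- hbase-site.xml, on both client and server side -->
      <property>
        <name>phoenix.schema.isNamespaceMappingEnabled</name>
        <value>true</value>
      </property>
      ```

      With this enabled, Phoenix schemas map to HBase namespaces, which is why the table appears as SYSTEM:CATALOG (namespace-mapped) rather than SYSTEM.CATALOG in the stack traces above.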


            People

            • Assignee: Unassigned
            • Reporter: Ievgen Nekrashevych (inekrashevych)