I0123 04:40:47.585063 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 04:40:48.369436 21486 rpc-trace.cc:184] RPC call: CatalogService.ResetMetadata(from 10.153.201.23:53772)
I0123 04:40:48.369904 21486 catalog-server.cc:77] ResetMetadata(): request=TResetMetadataRequest { 01: protocol_version (i32) = 0, 02: is_refresh (bool) = true, 03: table_name (struct) = TTableName { 01: db_name (string) = "history", 02: table_name (string) = "bundle", }, 05: partition_spec (list) = list[3] { [0] = TPartitionKeyValue { 01: name (string) = "pa__schema_version", 02: value (string) = "1", }, [1] = TPartitionKeyValue { 01: name (string) = "pa__collector_id", 02: value (string) = "vsphere_h5.1_0", }, [2] = TPartitionKeyValue { 01: name (string) = "pa__arrival_day", 02: value (string) = "1485129600", }, }, }
I0123 04:40:48.374111 21486 CatalogServiceCatalog.java:1191] Refreshing Partition metadata: history.bundle pa__arrival_day=1485129600/pa__collector_id=vsphere_h5.1_0/pa__schema_version=1
I0123 04:40:48.423509 21486 HdfsTable.java:348] load block md for bundle file part-00000
I0123 04:40:48.425061 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_1
I0123 04:40:48.426838 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_10
I0123 04:40:48.428063 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_11
I0123 04:40:48.429025 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_12
I0123 04:40:48.431149 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_13
I0123 04:40:48.432801 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_14
I0123 04:40:48.433902 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_15
I0123 04:40:48.434875 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_16
I0123 04:40:48.436138 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_17
I0123 04:40:48.437196 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_18
I0123 04:40:48.439976 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_19
I0123 04:40:48.441071 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_2
I0123 04:40:48.442198 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_20
I0123 04:40:48.443265 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_21
I0123 04:40:48.444316 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_22
I0123 04:40:48.445758 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_23
I0123 04:40:48.447283 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_24
I0123 04:40:48.448370 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_25
I0123 04:40:48.449259 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_26
I0123 04:40:48.450529 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_27
I0123 04:40:48.452235 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_28
I0123 04:40:48.453650 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_29
I0123 04:40:48.455108 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_3
I0123 04:40:48.456434 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_30
I0123 04:40:48.457605 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_31
I0123 04:40:48.458783 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_32
I0123 04:40:48.459956 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_33
I0123 04:40:48.461967 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_34
I0123 04:40:48.463057 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_35
I0123 04:40:48.464046 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_36
I0123 04:40:48.465344 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_37
I0123 04:40:48.466482 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_38
I0123 04:40:48.468366 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_39
I0123 04:40:48.469666 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_4
I0123 04:40:48.470811 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_40
I0123 04:40:48.471940 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_41
I0123 04:40:48.473117 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_42
I0123 04:40:48.475735 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_43
I0123 04:40:48.477021 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_44
I0123 04:40:48.478171 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_45
I0123 04:40:48.479343 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_46
I0123 04:40:48.481933 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_47
I0123 04:40:48.482955 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_48
I0123 04:40:48.484055 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_49
I0123 04:40:48.485175 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_5
I0123 04:40:48.486205 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_50
I0123 04:40:48.489881 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_51
I0123 04:40:48.491125 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_52
I0123 04:40:48.492543 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_53
I0123 04:40:48.493605 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_54
I0123 04:40:48.495501 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_55
I0123 04:40:48.496667 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_56
I0123 04:40:48.497843 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_57
I0123 04:40:48.498842 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_58
I0123 04:40:48.499959 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_59
I0123 04:40:48.502285 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_6
I0123 04:40:48.503665 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_60
I0123 04:40:48.504923 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_61
I0123 04:40:48.505973 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_62
I0123 04:40:48.507874 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_63
I0123 04:40:48.509028 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_64
I0123 04:40:48.509974 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_65
I0123 04:40:48.511153 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_66
I0123 04:40:48.512251 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_67
I0123 04:40:48.513962 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_68
I0123 04:40:48.515430 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_69
I0123 04:40:48.516690 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_7
I0123 04:40:48.518055 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_70
I0123 04:40:48.519085 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_71
I0123 04:40:48.520308 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_72
I0123 04:40:48.525835 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_73
I0123 04:40:48.529974 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_74
I0123 04:40:48.531746 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_75
I0123 04:40:48.535881 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_8
I0123 04:40:48.540761 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_9
I0123 04:40:48.548558 21486 FsPermissionChecker.java:290] No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs) Java exception follows:
org.apache.hadoop.hdfs.protocol.AclException: The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false.
    at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
    at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3344)
    at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1983)
    at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1980)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getAclStatus(DistributedFileSystem.java:1980)
    at com.cloudera.impala.util.FsPermissionChecker.getPermissions(FsPermissionChecker.java:288)
    at com.cloudera.impala.catalog.HdfsTable.getAvailableAccessLevel(HdfsTable.java:760)
    at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:844)
    at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:792)
    at com.cloudera.impala.catalog.HdfsTable.reloadPartition(HdfsTable.java:1951)
    at com.cloudera.impala.catalog.CatalogServiceCatalog.reloadPartition(CatalogServiceCatalog.java:1210)
    at com.cloudera.impala.service.CatalogOpExecutor.execResetMetadata(CatalogOpExecutor.java:2707)
    at com.cloudera.impala.service.JniCatalog.resetMetadata(JniCatalog.java:151)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AclException): The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false.
    at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
    at org.apache.hadoop.ipc.Client.call(Client.java:1471)
    at org.apache.hadoop.ipc.Client.call(Client.java:1408)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
    at com.sun.proxy.$Proxy10.getAclStatus(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAclStatus(ClientNamenodeProtocolTranslatorPB.java:1327)
    at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy11.getAclStatus(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3342)
    ... 12 more
I0123 04:40:48.550004 21486 HdfsTable.java:441] Loading disk ids for: history.bundle. nodes: 14.
filesystem: hdfs://ph-hdp-prd-nn01:8020 I0123 04:40:48.585376 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:40:48.585471 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:40:48.913786 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:40:48.914126 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:40:49.586345 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:40:49.586459 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:40:50.586840 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:40:50.586930 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:40:50.914741 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:40:50.914947 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:40:51.587416 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:40:51.587519 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:40:52.587842 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:40:52.587992 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:40:52.914695 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:40:52.914950 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 1.000ms I0123 04:40:53.240084 11226 catalog-server.cc:316] Publishing update: TABLE:history.bundle@45768 I0123 04:40:53.587682 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:40:53.587923 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms I0123 04:40:53.596460 11226 catalog-server.cc:316] Publishing update: CATALOG:9ffe97f5f2c34e09:a39e74f98d4721d8@45768 I0123 04:40:54.588064 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:40:54.588189 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:40:54.914697 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:40:55.175173 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 261.000ms I0123 04:40:55.550540 21486 catalog-server.cc:83] ResetMetadata(): response=TResetMetadataResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 45768, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 04: 
updated_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 45768, 05: table (struct) = TTable { 01: db_name (string) = "history", 02: tbl_name (string) = "bundle", 04: id (i32) = 4153, 05: access_level (i32) = 1, 06: columns (list) = list[18] { [0] = TColumn { 01: columnName (string) = "id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 36.555374145507812, 02: max_size (i64) = 117, 03: num_distinct_values (i64) = 23103018, 04: num_nulls (i64) = -1, }, 05: position (i32) = 3, }, [1] = TColumn { 01: columnName (string) = "internal_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 10202831, 04: num_nulls (i64) = -1, }, 05: position (i32) = 4, }, [2] = TColumn { 01: columnName (string) = "size_in_bytes", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 322605, 04: num_nulls (i64) = -1, }, 05: position (i32) = 5, }, [3] = TColumn { 01: columnName (string) = "ext", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 6, }, [4] = TColumn { 01: columnName (string) = "pa__detected_proxy_sources", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 0.25181648135185242, 02: max_size (i64) = 21, 03: num_distinct_values (i64) = 8, 04: num_nulls (i64) = -1, }, 05: position (i32) = 7, }, [5] = TColumn { 01: columnName (string) = "pa__proxy_source", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 7.9655427932739258, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 4, 04: num_nulls (i64) = -1, }, 05: position (i32) = 8, }, [6] = TColumn { 01: columnName (string) = "pa__os_language", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 2, 02: max_size (i64) = 2, 03: num_distinct_values (i64) = 14, 04: num_nulls (i64) = -1, }, 05: position (i32) = 9, }, [7] = TColumn { 01: columnName (string) = "collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, 
}, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 33.735977172851562, 02: max_size (i64) = 88, 03: num_distinct_values (i64) = 925201, 04: num_nulls (i64) = -1, }, 05: position (i32) = 10, }, [8] = TColumn { 01: columnName (string) = "collection__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 32, 02: max_size (i64) = 32, 03: num_distinct_values (i64) = 555701, 04: num_nulls (i64) = -1, }, 05: position (i32) = 11, }, [9] = TColumn { 01: columnName (string) = "pa__is_external", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 12, }, [10] = TColumn { 01: columnName (string) = "pa__collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 33.148296356201172, 02: max_size (i64) = 91, 03: num_distinct_values (i64) = 1050297, 04: num_nulls (i64) = -1, }, 05: position (i32) = 13, }, [11] = TColumn { 01: columnName (string) = "pa__bundle__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 36.555374145507812, 02: max_size (i64) = 117, 03: num_distinct_values (i64) = 23103018, 04: num_nulls (i64) = -1, }, 05: position (i32) = 14, }, [12] = TColumn { 01: columnName (string) = "pa__arrival_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = 17700588, 04: num_nulls (i64) = -1, }, 05: position (i32) = 15, }, [13] = TColumn { 01: columnName (string) = "pa__processed_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = 19070410, 04: num_nulls (i64) = -1, }, 05: position (i32) = 16, }, [14] = TColumn { 01: columnName (string) = "pa__kafka_partition_offset", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 8213797, 04: num_nulls (i64) = -1, }, 05: position (i32) = 17, }, [15] = TColumn { 01: columnName (string) = "pa__kafka_partition", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: 
col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 18, }, [16] = TColumn { 01: columnName (string) = "envelope_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = 5258683, 04: num_nulls (i64) = -1, }, 05: position (i32) = 19, }, [17] = TColumn { 01: columnName (string) = "pa__client_ip_path", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 10.111308097839355, 02: max_size (i64) = 42, 03: num_distinct_values (i64) = 111645, 04: num_nulls (i64) = -1, }, 05: position (i32) = 20, }, }, 07: clustering_columns (list) = list[3] { [0] = TColumn { 01: columnName (string) = "pa__arrival_day", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 0, }, [1] = TColumn { 01: columnName (string) = "pa__collector_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 1, }, [2] = TColumn { 01: columnName (string) = "pa__schema_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 2, }, }, 08: table_stats (struct) = TTableStats { 01: num_rows (i64) = 23289772, }, 09: table_type (i32) = 0, 10: hdfs_table (struct) = THdfsTable { 01: hdfsBaseDir (string) = "hdfs://ph-hdp-prd-nn01:8020/user/hive/warehouse/history.db/bundle", 02: colNames (list) = list[21] { [0] = "pa__arrival_day", [1] = "pa__collector_id", [2] = "pa__schema_version", [3] = "id", [4] = "internal_id", [5] = "size_in_bytes", [6] = "ext", [7] = "pa__detected_proxy_sources", [8] = "pa__proxy_source", [9] = "pa__os_language", [10] = "collector_instance_id", [11] = "collection__fk", [12] = "pa__is_external", [13] = "pa__collector_instance_id", [14] = "pa__bundle__fk", [15] = "pa__arrival_ts", [16] = "pa__processed_ts", [17] = "pa__kafka_partition_offset", [18] = "pa__kafka_partition", [19] = "envelope_ts", [20] = "pa__client_ip_path", }, 03: nullPartitionKeyValue (string) = "__HIVE_DEFAULT_PARTITION__", 04: partitions (map) = map[18153] { -1 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[0] 
{ }, 08: blockSize (i32) = 0, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = -1, 15: hms_parameters (map) = map[0] { }, }, 17022 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 0, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "com.vmware.ph.vc55u2.nonintrusive", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "c44807a1221194b9-984fed3e00000004_999727056_data.0.parq", 02: length (i64) = 9779, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1484746761391, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 9779, 03: replica_host_idxs (list) = list[3] { [0] = 0, [1] = 1, [2] = 2, }, 04: disk_ids (list) = list[3] { [0] = 1, [1] = 1, [2] = 1, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=0/pa__collector_id=com.vmware.ph.vc55u2.nonintrusive/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = 50, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 17022, 15: hms_parameters (map) = map[8] { "COLUMN_STATS_ACCURATE" -> "true", "impala_intermediate_stats_chunk0" -> "HBZkABsSjAtlbnZl[...](1784)", "impala_intermediate_stats_num_chunks" -> "1", "numFiles" -> "1", "numRows" -> "50", "rawDataSize" -> "-1", "totalSize" -> "9779", "transient_lastDdlTime" -> "1484746956", }, }, 17023 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1409529600, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: 
node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "com.vmware.ph.vc55u2.nonintrusive", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "c44807a1221194b9-984fed3e00000009_150531334_data.0.parq", 02: length (i64) = 3417, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1484746762924, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 3417, 03: replica_host_idxs (list) = list[3] { [0] = 3, [1] = 4, [2] = 5, }, 04: disk_ids (list) = list[3] { [0] = 0, [1] = 1, [2] = 2, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=1409529600/pa__collector_id=com.vmware.ph.vc55u2.nonintrusive/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = 3, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 17023, 15: hms_parameters (map) = map[8] { "COLUMN_STATS_ACCURATE" -> "true", "impala_intermediate_stats_chunk0" -> "HBYGABsSjAtlbnZl[...](1172)", "impala_intermediate_stats_num_chunks" -> "1", "numFiles" -> "1", "numRows" -> "3", "rawDataSize" -> "-1", "totalSize" -> "3417", "transient_lastDdlTime" -> "1484746954", }, }, 17024 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1409702400, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = T I0123 04:40:55.588639 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:40:55.588726 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms I0123 04:40:56.539002 21486 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ResetMetadata from 10.153.201.23:53772 took 8s170ms I0123 04:40:56.589210 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:40:56.589331 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:40:57.564251 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:40:57.589579 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:40:57.589675 
11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:40:58.414716 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 850.000ms I0123 04:40:58.590664 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:40:58.590790 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms I0123 04:40:59.591078 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:40:59.591188 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:00.415630 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:41:00.415885 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 1.000ms I0123 04:41:00.592087 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:00.592300 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:01.592505 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:01.592615 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:02.416466 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:41:02.416617 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:41:02.592461 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:02.592602 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:03.592826 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:03.593014 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:04.417429 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:41:04.417634 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:41:04.594624 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:04.594835 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms I0123 04:41:05.594986 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:05.595098 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:06.418596 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:41:06.418805 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 1.000ms I0123 04:41:06.596257 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:06.596436 11232 
rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:07.596638 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:07.596726 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms I0123 04:41:08.419570 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:41:08.419868 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 1.000ms I0123 04:41:08.597062 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:08.597167 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:09.598263 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:09.598443 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:10.420433 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:41:10.420573 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:41:10.599269 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:10.599376 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:11.600304 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:11.600522 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:12.223170 30948 webserver.cc:417] Rendering page /jsonmetrics took 1216.23K clock cycles I0123 04:41:12.421267 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:41:12.421407 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:41:12.600649 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:12.600750 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms I0123 04:41:13.600924 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:13.601068 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:14.422101 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:41:14.422289 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:41:14.602368 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:14.602533 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:15.603315 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:15.603406 11232 rpc-trace.cc:194] RPC call: 
statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:16.422981 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:41:16.423130 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:41:16.604450 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:16.604552 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:17.605036 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:17.605124 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:18.423868 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:41:18.424016 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:41:18.605475 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:18.605573 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:19.502269 11222 CatalogServiceCatalog.java:200] Reloading cache pool names from HDFS I0123 04:41:19.606475 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:19.606577 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:20.424726 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:41:20.424974 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:41:20.606643 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:20.606741 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms I0123 04:41:21.606886 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:21.607029 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:22.425642 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:41:22.425776 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 1.000ms I0123 04:41:22.608475 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:22.608613 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:23.609143 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:23.609319 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:24.426443 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:41:24.426599 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 
10.153.201.11:51415 took 0.000ns I0123 04:41:24.609729 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:24.609930 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:25.609658 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:25.609952 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms I0123 04:41:26.427162 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:41:26.427306 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:41:26.611039 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:26.611143 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:27.612258 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:27.612440 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:28.427997 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:41:28.428133 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:41:28.613446 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:28.613574 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:29.614497 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:29.614588 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:30.428783 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:41:30.428949 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:41:30.615030 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:30.615212 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:31.616154 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:31.616343 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:31.900550 11246 rpc-trace.cc:184] RPC call: CatalogService.ExecDdl(from 10.153.201.20:38750) I0123 04:41:31.900676 11246 catalog-server.cc:65] ExecDdl(): request=TDdlExecRequest { 01: protocol_version (i32) = 0, 02: ddl_type (i32) = 10, 11: drop_table_or_view_params (struct) = TDropTableOrViewParams { 01: table_name (struct) = TTableName { 01: db_name (string) = "staging", 02: table_name (string) = "phanal_20170123_044035_bundle", }, 02: if_exists (bool) = true, 03: purge (bool) = false, 04: is_table (bool) = true, }, 17: header (struct) = TCatalogServiceRequestHeader { 01: requesting_user (string) = 
"phanalytics-staging@PHONEHOME.VMWARE.COM", }, } I0123 04:41:31.901207 11246 CatalogOpExecutor.java:1156] Dropping table/view staging.phanal_20170123_044035_bundle I0123 04:41:31.901347 11246 catalog-server.cc:71] ExecDdl(): response=TDdlExecResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 0, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, }, } I0123 04:41:31.901484 11246 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ExecDdl from 10.153.201.20:38750 took 1.000ms I0123 04:41:31.951804 11246 rpc-trace.cc:184] RPC call: CatalogService.ExecDdl(from 10.153.201.20:38750) I0123 04:41:31.952986 11246 catalog-server.cc:65] ExecDdl(): request=TDdlExecRequest { 01: protocol_version (i32) = 0, 02: ddl_type (i32) = 3, 06: create_table_params (struct) = TCreateTableParams { 01: table_name (struct) = TTableName { 01: db_name (string) = "staging", 02: table_name (string) = "phanal_20170123_044035_bundle", }, 02: columns (list) = list[21] { [0] = TColumn { 01: columnName (string) = "id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [1] = TColumn { 01: columnName (string) = "internal_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [2] = TColumn { 01: columnName (string) = "size_in_bytes", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [3] = TColumn { 01: columnName (string) = "ext", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [4] = TColumn { 01: columnName (string) = "pa__detected_proxy_sources", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [5] = TColumn { 01: columnName (string) = "pa__proxy_source", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [6] = TColumn { 01: columnName (string) = "pa__os_language", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [7] = TColumn { 01: columnName (string) = "collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [8] = TColumn { 01: 
columnName (string) = "collection__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [9] = TColumn { 01: columnName (string) = "pa__is_external", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [10] = TColumn { 01: columnName (string) = "pa__collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [11] = TColumn { 01: columnName (string) = "pa__bundle__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [12] = TColumn { 01: columnName (string) = "pa__arrival_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [13] = TColumn { 01: columnName (string) = "pa__processed_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [14] = TColumn { 01: columnName (string) = "envelope_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [15] = TColumn { 01: columnName (string) = "pa__kafka_partition_offset", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [16] = TColumn { 01: columnName (string) = "pa__kafka_partition", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [17] = TColumn { 01: columnName (string) = "pa__client_ip_path", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [18] = TColumn { 01: columnName (string) = "pa__arrival_day", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [19] = TColumn { 01: columnName (string) = "pa__collector_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment 
(string) = "Inferred from Parquet file.", }, [20] = TColumn { 01: columnName (string) = "pa__schema_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, }, 04: file_format (i32) = 4, 05: is_external (bool) = true, 06: if_not_exists (bool) = false, 07: owner (string) = "phanalytics-staging@PHONEHOME.VMWARE.COM", 08: row_format (struct) = TTableRowFormat { }, 10: location (string) = "hdfs://ph-hdp-prd-nn01:8020/user/etl/staging/staging__snapshot_staging/parquet/phanalytics.dev/bundle", }, 17: header (struct) = TCatalogServiceRequestHeader { 01: requesting_user (string) = "phanalytics-staging@PHONEHOME.VMWARE.COM", }, } I0123 04:41:31.953590 11246 CatalogOpExecutor.java:1367] Creating table staging.phanal_20170123_044035_bundle I0123 04:41:32.000838 11246 catalog-server.cc:71] ExecDdl(): response=TDdlExecResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 45769, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 04: updated_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 45769, 05: table (struct) = TTable { 01: db_name (string) = "staging", 02: tbl_name (string) = "phanal_20170123_044035_bundle", 04: id (i32) = 19556, }, }, }, } I0123 04:41:32.000931 11246 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ExecDdl from 10.153.201.20:38750 took 49.000ms I0123 04:41:32.242213 11246 rpc-trace.cc:184] RPC call: CatalogService.PrioritizeLoad(from 10.153.201.20:38750) I0123 04:41:32.242364 11246 catalog-server.cc:127] PrioritizeLoad(): request=TPrioritizeLoadRequest { 01: protocol_version (i32) = 0, 02: header (struct) = TCatalogServiceRequestHeader { }, 03: object_descs (list) = list[1] { [0] = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 0, 05: table (struct) = TTable { 01: db_name (string) = "staging", 02: tbl_name (string) = "phanal_20170123_044035_bundle", }, }, }, } I0123 04:41:32.242645 11246 catalog-server.cc:133] PrioritizeLoad(): response=TPrioritizeLoadResponse { 01: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, } I0123 04:41:32.242866 11246 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.PrioritizeLoad from 10.153.201.20:38750 took 1.000ms I0123 04:41:32.243101 11202 TableLoadingMgr.java:281] Loading next table. 
Remaining items in queue: 0 I0123 04:41:32.243674 11603 TableLoader.java:59] Loading metadata for: staging.phanal_20170123_044035_bundle I0123 04:41:32.268139 11603 Table.java:161] Loading column stats for table: phanal_20170123_044035_bundle I0123 04:41:32.297785 11603 HdfsTable.java:1030] load table from Hive Metastore: staging.phanal_20170123_044035_bundle I0123 04:41:32.304970 11603 MetaStoreUtil.java:129] Fetching 0 partitions for: staging.phanal_20170123_044035_bundle using partition batch size: 1000 I0123 04:41:32.312589 11603 HdfsTable.java:348] load block md for phanal_20170123_044035_bundle file bundle-r-00000-5ea6bc1b-d623-458a-93b6-37e7458e130c.parquet I0123 04:41:32.313853 11603 HdfsTable.java:348] load block md for phanal_20170123_044035_bundle file bundle-r-00001-94670727-8406-48eb-bb62-27b3aa0f247d.parquet I0123 04:41:32.321600 11603 FsPermissionChecker.java:290] No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs) Java exception follows: org.apache.hadoop.hdfs.protocol.AclException: The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false. at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080) at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3344) at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1983) at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1980) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.getAclStatus(DistributedFileSystem.java:1980) at com.cloudera.impala.util.FsPermissionChecker.getPermissions(FsPermissionChecker.java:288) at com.cloudera.impala.catalog.HdfsTable.getAvailableAccessLevel(HdfsTable.java:760) at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:844) at 
com.cloudera.impala.catalog.HdfsTable.loadAllPartitions(HdfsTable.java:716) at com.cloudera.impala.catalog.HdfsTable.load(HdfsTable.java:1035) at com.cloudera.impala.catalog.HdfsTable.load(HdfsTable.java:982) at com.cloudera.impala.catalog.TableLoader.load(TableLoader.java:81) at com.cloudera.impala.catalog.TableLoadingMgr$2.call(TableLoadingMgr.java:232) at com.cloudera.impala.catalog.TableLoadingMgr$2.call(TableLoadingMgr.java:229) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AclException): The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false. at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080) at org.apache.hadoop.ipc.Client.call(Client.java:1471) at org.apache.hadoop.ipc.Client.call(Client.java:1408) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230) at com.sun.proxy.$Proxy10.getAclStatus(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAclStatus(ClientNamenodeProtocolTranslatorPB.java:1327) at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104) at com.sun.proxy.$Proxy11.getAclStatus(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3342) ... 17 more I0123 04:41:32.327925 11603 FsPermissionChecker.java:290] No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs) Java exception follows: org.apache.hadoop.hdfs.protocol.AclException: The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false. 
at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080) at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3344) at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1983) at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1980) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.getAclStatus(DistributedFileSystem.java:1980) at com.cloudera.impala.util.FsPermissionChecker.getPermissions(FsPermissionChecker.java:288) at com.cloudera.impala.catalog.HdfsTable.getAvailableAccessLevel(HdfsTable.java:760) at com.cloudera.impala.catalog.HdfsTable.loadAllPartitions(HdfsTable.java:722) at com.cloudera.impala.catalog.HdfsTable.load(HdfsTable.java:1035) at com.cloudera.impala.catalog.HdfsTable.load(HdfsTable.java:982) at com.cloudera.impala.catalog.TableLoader.load(TableLoader.java:81) at com.cloudera.impala.catalog.TableLoadingMgr$2.call(TableLoadingMgr.java:232) at com.cloudera.impala.catalog.TableLoadingMgr$2.call(TableLoadingMgr.java:229) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AclException): The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false. 
at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080) at org.apache.hadoop.ipc.Client.call(Client.java:1471) at org.apache.hadoop.ipc.Client.call(Client.java:1408) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230) at com.sun.proxy.$Proxy10.getAclStatus(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAclStatus(ClientNamenodeProtocolTranslatorPB.java:1327) at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104) at com.sun.proxy.$Proxy11.getAclStatus(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3342) ... 16 more I0123 04:41:32.329126 11603 HdfsTable.java:441] Loading disk ids for: staging.phanal_20170123_044035_bundle. nodes: 6. 
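Note: the AclException traces above are logged at INFO level and are expected on this cluster. dfs.namenode.acls.enabled is false on the NameNode (a server-side hdfs-site.xml setting), so every getAclStatus() call is rejected and FsPermissionChecker falls back to plain permission bits, exactly as the message "No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs)" says. Below is a minimal sketch of that fallback pattern against the public HDFS client API, for illustration only; the path and the printed output are assumptions, and this is not the catalog server's actual code.

```java
// Sketch of the "try ACLs, fall back to permission bits" pattern the log shows.
// Uses only public Hadoop client APIs; error handling is illustrative.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.AclStatus;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.hdfs.protocol.AclException;

public class AclFallbackCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path dir = new Path("/user/hive/warehouse/history_staging.db/bundle");

        AclStatus acls = null;
        try {
            // Rejected with AclException when the NameNode runs with
            // dfs.namenode.acls.enabled=false (a server-side hdfs-site.xml setting).
            acls = fs.getAclStatus(dir);
        } catch (AclException e) {
            // Same situation as in the log: no ACLs retrieved, rely on permission
            // bits; HDFS itself still enforces whatever access rules it has.
            System.out.println("ACLs disabled on NameNode, using permission bits only");
        }

        FileStatus status = fs.getFileStatus(dir);
        FsPermission perms = status.getPermission();
        System.out.println("owner=" + status.getOwner()
            + " group=" + status.getGroup()
            + " perms=" + perms
            + " aclEntries=" + (acls == null ? 0 : acls.getEntries().size()));
    }
}
```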
filesystem: hdfs://ph-hdp-prd-nn01:8020 I0123 04:41:32.428617 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:41:32.428750 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 1.000ms I0123 04:41:32.617483 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:32.617569 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:33.617662 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:33.617753 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms I0123 04:41:34.121114 11226 catalog-server.cc:316] Publishing update: TABLE:staging.phanal_20170123_044035_bundle@45770 I0123 04:41:34.123299 11226 catalog-server.cc:316] Publishing update: CATALOG:9ffe97f5f2c34e09:a39e74f98d4721d8@45770 I0123 04:41:34.429344 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:41:34.429601 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:41:34.618288 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:34.618487 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:35.618940 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:35.619132 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:36.430596 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:41:36.430752 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 1.000ms I0123 04:41:36.620177 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:36.620362 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:36.659677 11246 rpc-trace.cc:184] RPC call: CatalogService.UpdateCatalog(from 10.153.201.20:38750) I0123 04:41:36.660023 11246 catalog-server.cc:90] UpdateCatalog(): request=TUpdateCatalogRequest { 01: protocol_version (i32) = 0, 02: header (struct) = TCatalogServiceRequestHeader { 01: requesting_user (string) = "phanalytics-staging@PHONEHOME.VMWARE.COM", }, 03: target_table (string) = "bundle", 04: db_name (string) = "history_staging", 05: created_partitions (set) = set[1] { "pa__arrival_day=1485129600/pa__collector_id=phanalytics.dev/pa__schema_version=1/", }, } W0123 04:41:36.719564 11246 MetaStoreUtils.java:338] Updating partition stats fast for: bundle W0123 04:41:36.723909 11246 MetaStoreUtils.java:341] Updated size to 3197 I0123 04:41:36.788142 11246 CatalogOpExecutor.java:2591] Updating lastDdlTime for table: bundle I0123 04:41:36.869606 11246 HdfsTable.java:1038] incremental update for table: history_staging.bundle I0123 04:41:36.869731 11246 HdfsTable.java:1103] sync table partitions: bundle I0123 04:41:36.892046 11246 MetaStoreUtil.java:129] Fetching 1 partitions for: history_staging.bundle using partition batch size: 1000 I0123 
04:41:36.911300 11246 HdfsTable.java:348] load block md for bundle file 304368511bc3e0ba-6038985a00000000_1243388695_data.0.parq I0123 04:41:36.921085 11246 FsPermissionChecker.java:290] No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs) Java exception follows: org.apache.hadoop.hdfs.protocol.AclException: The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false. at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080) at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3344) at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1983) at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1980) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.getAclStatus(DistributedFileSystem.java:1980) at com.cloudera.impala.util.FsPermissionChecker.getPermissions(FsPermissionChecker.java:288) at com.cloudera.impala.catalog.HdfsTable.getAvailableAccessLevel(HdfsTable.java:760) at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:844) at com.cloudera.impala.catalog.HdfsTable.loadPartitionsFromMetastore(HdfsTable.java:1387) at com.cloudera.impala.catalog.HdfsTable.updatePartitionsFromHms(HdfsTable.java:1155) at com.cloudera.impala.catalog.HdfsTable.load(HdfsTable.java:1044) at com.cloudera.impala.service.CatalogOpExecutor.loadTableMetadata(CatalogOpExecutor.java:477) at com.cloudera.impala.service.CatalogOpExecutor.updateCatalog(CatalogOpExecutor.java:2932) at com.cloudera.impala.service.JniCatalog.updateCatalog(JniCatalog.java:253) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AclException): The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false. 
at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080) at org.apache.hadoop.ipc.Client.call(Client.java:1471) at org.apache.hadoop.ipc.Client.call(Client.java:1408) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230) at com.sun.proxy.$Proxy10.getAclStatus(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAclStatus(ClientNamenodeProtocolTranslatorPB.java:1327) at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104) at com.sun.proxy.$Proxy11.getAclStatus(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3342) ... 13 more I0123 04:41:36.922070 11246 HdfsTable.java:441] Loading disk ids for: history_staging.bundle. nodes: 14. 
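Note: the UpdateCatalog request above (target_table bundle in history_staging, one created partition pa__arrival_day=1485129600/pa__collector_id=phanalytics.dev/pa__schema_version=1/) is what an impalad reports after an INSERT completes, and the ExecDdl drop requests later in the log remove the temporary staging table. A plausible, hypothetical reconstruction of that promote-and-drop step is sketched below; only the table names, the partition key names and the drop come from the log, while the connection URL and the SELECT are assumptions.

```java
// Hypothetical reconstruction of the load step implied by the UpdateCatalog request
// and the DROP TABLE ExecDdl requests around it. Not taken from the job's code.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class PromoteStagingBundle {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        String url = "jdbc:hive2://impalad.example.com:21050/staging;"
                   + "principal=impala/_HOST@PHONEHOME.VMWARE.COM";  // assumed URL
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement()) {
            // Dynamic-partition insert: the staging table's trailing pa__arrival_day /
            // pa__collector_id / pa__schema_version columns supply the partition keys,
            // which is what produces the created_partitions entry in UpdateCatalog.
            stmt.execute(
                "INSERT INTO history_staging.bundle "
              + "PARTITION (pa__arrival_day, pa__collector_id, pa__schema_version) "
              + "SELECT * FROM staging.phanal_20170123_044035_bundle");
            // The temporary staging table is then dropped, matching the
            // drop_table_or_view_params ExecDdl requests that follow in the log.
            stmt.execute("DROP TABLE IF EXISTS staging.phanal_20170123_044035_bundle");
        }
    }
}
```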
filesystem: hdfs://ph-hdp-prd-nn01:8020 I0123 04:41:36.935639 11246 HdfsTable.java:1416] loading file metadata for 1 partitions I0123 04:41:37.226941 11246 catalog-server.cc:96] UpdateCatalog(): response=TUpdateCatalogResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 45771, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 04: updated_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 45771, 05: table (struct) = TTable { 01: db_name (string) = "history_staging", 02: tbl_name (string) = "bundle", 04: id (i32) = 4125, 05: access_level (i32) = 1, 06: columns (list) = list[18] { [0] = TColumn { 01: columnName (string) = "id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 3, }, [1] = TColumn { 01: columnName (string) = "internal_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 4, }, [2] = TColumn { 01: columnName (string) = "size_in_bytes", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 5, }, [3] = TColumn { 01: columnName (string) = "ext", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 6, }, [4] = TColumn { 01: columnName (string) = "pa__detected_proxy_sources", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 7, }, [5] = TColumn { 01: columnName (string) = "pa__proxy_source", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 8, }, [6] = TColumn { 01: columnName (string) = "pa__os_language", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type 
(struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 9, }, [7] = TColumn { 01: columnName (string) = "collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 10, }, [8] = TColumn { 01: columnName (string) = "collection__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 11, }, [9] = TColumn { 01: columnName (string) = "pa__is_external", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 12, }, [10] = TColumn { 01: columnName (string) = "pa__collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 13, }, [11] = TColumn { 01: columnName (string) = "pa__bundle__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 14, }, [12] = TColumn { 01: columnName (string) = "pa__arrival_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 15, }, [13] = TColumn { 01: columnName (string) = "pa__processed_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 16, }, [14] = TColumn { 01: columnName (string) = "envelope_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) 
= 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 17, }, [15] = TColumn { 01: columnName (string) = "pa__kafka_partition_offset", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 18, }, [16] = TColumn { 01: columnName (string) = "pa__kafka_partition", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 19, }, [17] = TColumn { 01: columnName (string) = "pa__client_ip_path", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 20, }, }, 07: clustering_columns (list) = list[3] { [0] = TColumn { 01: columnName (string) = "pa__arrival_day", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 77, 04: num_nulls (i64) = 0, }, 05: position (i32) = 0, }, [1] = TColumn { 01: columnName (string) = "pa__collector_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = 90, 04: num_nulls (i64) = 0, }, 05: position (i32) = 1, }, [2] = TColumn { 01: columnName (string) = "pa__schema_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 1, 04: num_nulls (i64) = 0, }, 05: position (i32) = 2, }, }, 09: table_type (i32) = 0, 10: hdfs_table (struct) = THdfsTable { 01: hdfsBaseDir (string) = "hdfs://ph-hdp-prd-nn01:8020/user/hive/warehouse/history_staging.db/bundle", 02: colNames (list) = list[21] { [0] = "pa__arrival_day", [1] = "pa__collector_id", [2] = "pa__schema_version", [3] = "id", [4] = "internal_id", [5] = "size_in_bytes", [6] = "ext", [7] = "pa__detected_proxy_sources", [8] = "pa__proxy_source", [9] = "pa__os_language", [10] = "collector_instance_id", [11] = "collection__fk", [12] = "pa__is_external", [13] = "pa__collector_instance_id", [14] = "pa__bundle__fk", [15] = "pa__arrival_ts", [16] = "pa__processed_ts", [17] = "envelope_ts", [18] = "pa__kafka_partition_offset", [19] = "pa__kafka_partition", [20] = "pa__client_ip_path", }, 03: nullPartitionKeyValue (string) = 
"__HIVE_DEFAULT_PARTITION__", 04: partitions (map) = map[824] { -1 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[0] { }, 08: blockSize (i32) = 0, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = -1, 15: hms_parameters (map) = map[0] { }, }, 4461 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1462406400, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "kafka-output", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "b434d4229412fbf-8d7ef47f0000000d_592464453_data.0.parq", 02: length (i64) = 3029, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1482419824254, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 3029, 03: replica_host_idxs (list) = list[3] { [0] = 0, [1] = 1, [2] = 2, }, 04: disk_ids (list) = list[3] { [0] = 2, [1] = 2, [2] = 1, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=1462406400/pa__collector_id=kafka-output/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 4461, 15: hms_parameters (map) = map[6] { "COLUMN_STATS_ACCURATE" -> "false", "numFiles" -> "1", "numRows" -> "-1", "rawDataSize" -> "-1", "totalSize" -> "3029", "transient_lastDdlTime" -> "1484725727", }, }, 4462 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children 
(i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1462838400, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "vsm.1_0", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "b434d4229412fbf-8d7ef47f00000002_1792732270_data.0.parq", 02: length (i64) = 3227, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1482419824277, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 3227, 03: replica_host_idxs (list) = list[3] { [0] = 2, [1] = 3, [2] = 0, }, 04: disk_ids (list) = list[3] { [0] = 1, [1] = 1, [2] = 0, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=1462838400/pa__collector_id=vsm.1_0/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 4462, 15: hms_parameters (map) = map[6] { "COLUMN_STATS_ACCURATE" -> "false", "numFiles" -> "1", "numRows" -> "-1", "rawDataSize" -> "-1", "totalSize" -> "3227", "transient_lastDdlTime" -> "1484725727", }, }, 4463 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1463875200, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { I0123 04:41:37.264515 11246 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.UpdateCatalog from 10.153.201.20:38750 took 605.000ms I0123 04:41:37.620993 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:37.621165 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:37.665979 11246 rpc-trace.cc:184] RPC call: CatalogService.ExecDdl(from 10.153.201.20:38750) I0123 04:41:37.666600 11246 catalog-server.cc:65] ExecDdl(): request=TDdlExecRequest { 01: protocol_version (i32) = 0, 02: 
ddl_type (i32) = 10, 11: drop_table_or_view_params (struct) = TDropTableOrViewParams { 01: table_name (struct) = TTableName { 01: db_name (string) = "staging", 02: table_name (string) = "phanal_20170123_044035_bundle", }, 02: if_exists (bool) = false, 03: purge (bool) = false, 04: is_table (bool) = true, }, 17: header (struct) = TCatalogServiceRequestHeader { 01: requesting_user (string) = "phanalytics-staging@PHONEHOME.VMWARE.COM", }, } I0123 04:41:37.667127 11246 CatalogOpExecutor.java:1156] Dropping table/view staging.phanal_20170123_044035_bundle I0123 04:41:38.299783 11246 catalog-server.cc:71] ExecDdl(): response=TDdlExecResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 45772, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 05: removed_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 45772, 05: table (struct) = TTable { 01: db_name (string) = "staging", 02: tbl_name (string) = "phanal_20170123_044035_bundle", }, }, }, } I0123 04:41:38.299878 11246 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ExecDdl from 10.153.201.20:38750 took 634.000ms I0123 04:41:38.431377 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:41:38.431594 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:41:38.585572 11226 catalog-server.cc:316] Publishing update: TABLE:history_staging.bundle@45771 I0123 04:41:38.603123 11226 catalog-server.cc:316] Publishing update: CATALOG:9ffe97f5f2c34e09:a39e74f98d4721d8@45772 I0123 04:41:38.603140 11226 catalog-server.cc:335] Publishing deletion: TABLE:staging.phanal_20170123_044035_bundle I0123 04:41:38.621934 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:38.622027 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:39.037768 11246 rpc-trace.cc:184] RPC call: CatalogService.ExecDdl(from 10.153.201.20:38750) I0123 04:41:39.037907 11246 catalog-server.cc:65] ExecDdl(): request=TDdlExecRequest { 01: protocol_version (i32) = 0, 02: ddl_type (i32) = 10, 11: drop_table_or_view_params (struct) = TDropTableOrViewParams { 01: table_name (struct) = TTableName { 01: db_name (string) = "staging", 02: table_name (string) = "phanal_20170123_044035_bundle", }, 02: if_exists (bool) = true, 03: purge (bool) = false, 04: is_table (bool) = true, }, 17: header (struct) = TCatalogServiceRequestHeader { 01: requesting_user (string) = "phanalytics-staging@PHONEHOME.VMWARE.COM", }, } I0123 04:41:39.038331 11246 CatalogOpExecutor.java:1156] Dropping table/view staging.phanal_20170123_044035_bundle I0123 04:41:39.038512 11246 catalog-server.cc:71] ExecDdl(): response=TDdlExecResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 0, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, }, } I0123 04:41:39.038657 11246 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ExecDdl from 10.153.201.20:38750 took 0.000ns I0123 04:41:39.623353 11232 rpc-trace.cc:184] RPC call: 
StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:39.623531 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:40.432397 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:41:40.433264 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 1.000ms I0123 04:41:40.624272 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:40.624461 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:41.624851 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:41.624960 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:42.435643 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:41:42.438417 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 3.000ms I0123 04:41:42.624577 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:42.624780 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms I0123 04:41:43.625042 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:43.625149 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:44.439139 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:41:44.439401 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:41:44.625669 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:44.625826 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms I0123 04:41:45.625905 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:45.626102 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:45.695436 11246 rpc-trace.cc:184] RPC call: CatalogService.ExecDdl(from 10.153.201.20:38750) I0123 04:41:45.695600 11246 catalog-server.cc:65] ExecDdl(): request=TDdlExecRequest { 01: protocol_version (i32) = 0, 02: ddl_type (i32) = 10, 11: drop_table_or_view_params (struct) = TDropTableOrViewParams { 01: table_name (struct) = TTableName { 01: db_name (string) = "staging", 02: table_name (string) = "phanal_20170123_044035_bundle_debug_info", }, 02: if_exists (bool) = true, 03: purge (bool) = false, 04: is_table (bool) = true, }, 17: header (struct) = TCatalogServiceRequestHeader { 01: requesting_user (string) = "phanalytics-staging@PHONEHOME.VMWARE.COM", }, } I0123 04:41:45.696015 11246 CatalogOpExecutor.java:1156] Dropping table/view staging.phanal_20170123_044035_bundle_debug_info I0123 04:41:45.696131 11246 catalog-server.cc:71] ExecDdl(): response=TDdlExecResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) 
= -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 0, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, }, } I0123 04:41:45.696187 11246 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ExecDdl from 10.153.201.20:38750 took 1.000ms I0123 04:41:45.737635 11246 rpc-trace.cc:184] RPC call: CatalogService.ExecDdl(from 10.153.201.20:38750) I0123 04:41:45.738054 11246 catalog-server.cc:65] ExecDdl(): request=TDdlExecRequest { 01: protocol_version (i32) = 0, 02: ddl_type (i32) = 3, 06: create_table_params (struct) = TCreateTableParams { 01: table_name (struct) = TTableName { 01: db_name (string) = "staging", 02: table_name (string) = "phanal_20170123_044035_bundle_debug_info", }, 02: columns (list) = list[12] { [0] = TColumn { 01: columnName (string) = "id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [1] = TColumn { 01: columnName (string) = "job_name", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [2] = TColumn { 01: columnName (string) = "job_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [3] = TColumn { 01: columnName (string) = "sequence_file_name", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [4] = TColumn { 01: columnName (string) = "pa__is_external", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [5] = TColumn { 01: columnName (string) = "pa__collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [6] = TColumn { 01: columnName (string) = "pa__bundle__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [7] = TColumn { 01: columnName (string) = "pa__arrival_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [8] = TColumn { 01: columnName (string) = "pa__processed_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [9] = TColumn { 01: columnName (string) = "pa__arrival_day", 02: columnType 
(struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [10] = TColumn { 01: columnName (string) = "pa__collector_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [11] = TColumn { 01: columnName (string) = "pa__schema_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, }, 04: file_format (i32) = 4, 05: is_external (bool) = true, 06: if_not_exists (bool) = false, 07: owner (string) = "phanalytics-staging@PHONEHOME.VMWARE.COM", 08: row_format (struct) = TTableRowFormat { }, 10: location (string) = "hdfs://ph-hdp-prd-nn01:8020/user/etl/staging/staging__snapshot_staging/parquet/phanalytics.dev/bundle_debug_info", }, 17: header (struct) = TCatalogServiceRequestHeader { 01: requesting_user (string) = "phanalytics-staging@PHONEHOME.VMWARE.COM", }, } I0123 04:41:45.738781 11246 CatalogOpExecutor.java:1367] Creating table staging.phanal_20170123_044035_bundle_debug_info I0123 04:41:45.789788 11246 catalog-server.cc:71] ExecDdl(): response=TDdlExecResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 45773, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 04: updated_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 45773, 05: table (struct) = TTable { 01: db_name (string) = "staging", 02: tbl_name (string) = "phanal_20170123_044035_bundle_debug_info", 04: id (i32) = 19558, }, }, }, } I0123 04:41:45.789855 11246 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ExecDdl from 10.153.201.20:38750 took 53.000ms I0123 04:41:46.036819 11246 rpc-trace.cc:184] RPC call: CatalogService.PrioritizeLoad(from 10.153.201.20:38750) I0123 04:41:46.037158 11246 catalog-server.cc:127] PrioritizeLoad(): request=TPrioritizeLoadRequest { 01: protocol_version (i32) = 0, 02: header (struct) = TCatalogServiceRequestHeader { }, 03: object_descs (list) = list[1] { [0] = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 0, 05: table (struct) = TTable { 01: db_name (string) = "staging", 02: tbl_name (string) = "phanal_20170123_044035_bundle_debug_info", }, }, }, } I0123 04:41:46.037353 11246 catalog-server.cc:133] PrioritizeLoad(): response=TPrioritizeLoadResponse { 01: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, } I0123 04:41:46.037423 11246 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.PrioritizeLoad from 10.153.201.20:38750 took 0.000ns I0123 04:41:46.037591 11201 TableLoadingMgr.java:281] Loading next table. 
Remaining items in queue: 0 I0123 04:41:46.037853 12040 TableLoader.java:59] Loading metadata for: staging.phanal_20170123_044035_bundle_debug_info I0123 04:41:46.060441 12040 Table.java:161] Loading column stats for table: phanal_20170123_044035_bundle_debug_info I0123 04:41:46.085839 12040 HdfsTable.java:1030] load table from Hive Metastore: staging.phanal_20170123_044035_bundle_debug_info I0123 04:41:46.090950 12040 MetaStoreUtil.java:129] Fetching 0 partitions for: staging.phanal_20170123_044035_bundle_debug_info using partition batch size: 1000 I0123 04:41:46.093982 12040 HdfsTable.java:348] load block md for phanal_20170123_044035_bundle_debug_info file bundle_debug_info-r-00000-d6466eca-be89-4955-a673-90123dba3860.parquet I0123 04:41:46.095847 12040 HdfsTable.java:348] load block md for phanal_20170123_044035_bundle_debug_info file bundle_debug_info-r-00001-8ae4986d-0ba2-4f0d-84cf-d82116c5fed1.parquet I0123 04:41:46.100164 12040 FsPermissionChecker.java:290] No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs) Java exception follows: org.apache.hadoop.hdfs.protocol.AclException: The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false. at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080) at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3344) at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1983) at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1980) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.getAclStatus(DistributedFileSystem.java:1980) at com.cloudera.impala.util.FsPermissionChecker.getPermissions(FsPermissionChecker.java:288) at com.cloudera.impala.catalog.HdfsTable.getAvailableAccessLevel(HdfsTable.java:760) at 
com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:844) at com.cloudera.impala.catalog.HdfsTable.loadAllPartitions(HdfsTable.java:716) at com.cloudera.impala.catalog.HdfsTable.load(HdfsTable.java:1035) at com.cloudera.impala.catalog.HdfsTable.load(HdfsTable.java:982) at com.cloudera.impala.catalog.TableLoader.load(TableLoader.java:81) at com.cloudera.impala.catalog.TableLoadingMgr$2.call(TableLoadingMgr.java:232) at com.cloudera.impala.catalog.TableLoadingMgr$2.call(TableLoadingMgr.java:229) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AclException): The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false. at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080) at org.apache.hadoop.ipc.Client.call(Client.java:1471) at org.apache.hadoop.ipc.Client.call(Client.java:1408) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230) at com.sun.proxy.$Proxy10.getAclStatus(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAclStatus(ClientNamenodeProtocolTranslatorPB.java:1327) at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104) at com.sun.proxy.$Proxy11.getAclStatus(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3342) ... 17 more I0123 04:41:46.104951 12040 FsPermissionChecker.java:290] No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs) Java exception follows: org.apache.hadoop.hdfs.protocol.AclException: The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false. 
at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080) at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3344) at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1983) at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1980) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.getAclStatus(DistributedFileSystem.java:1980) at com.cloudera.impala.util.FsPermissionChecker.getPermissions(FsPermissionChecker.java:288) at com.cloudera.impala.catalog.HdfsTable.getAvailableAccessLevel(HdfsTable.java:760) at com.cloudera.impala.catalog.HdfsTable.loadAllPartitions(HdfsTable.java:722) at com.cloudera.impala.catalog.HdfsTable.load(HdfsTable.java:1035) at com.cloudera.impala.catalog.HdfsTable.load(HdfsTable.java:982) at com.cloudera.impala.catalog.TableLoader.load(TableLoader.java:81) at com.cloudera.impala.catalog.TableLoadingMgr$2.call(TableLoadingMgr.java:232) at com.cloudera.impala.catalog.TableLoadingMgr$2.call(TableLoadingMgr.java:229) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AclException): The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false. 
at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080) at org.apache.hadoop.ipc.Client.call(Client.java:1471) at org.apache.hadoop.ipc.Client.call(Client.java:1408) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230) at com.sun.proxy.$Proxy10.getAclStatus(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAclStatus(ClientNamenodeProtocolTranslatorPB.java:1327) at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104) at com.sun.proxy.$Proxy11.getAclStatus(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3342) ... 16 more I0123 04:41:46.105939 12040 HdfsTable.java:441] Loading disk ids for: staging.phanal_20170123_044035_bundle_debug_info. nodes: 5. 
filesystem: hdfs://ph-hdp-prd-nn01:8020 I0123 04:41:46.440124 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:41:46.440328 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:41:46.476122 11226 catalog-server.cc:316] Publishing update: TABLE:staging.phanal_20170123_044035_bundle_debug_info@45774 I0123 04:41:46.478312 11226 catalog-server.cc:316] Publishing update: CATALOG:9ffe97f5f2c34e09:a39e74f98d4721d8@45774 I0123 04:41:46.627131 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:46.627240 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:47.628078 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:47.628183 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:48.441113 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:41:48.441413 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:41:48.628566 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:48.628742 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms I0123 04:41:49.182054 11246 rpc-trace.cc:184] RPC call: CatalogService.UpdateCatalog(from 10.153.201.20:38750) I0123 04:41:49.182209 11246 catalog-server.cc:90] UpdateCatalog(): request=TUpdateCatalogRequest { 01: protocol_version (i32) = 0, 02: header (struct) = TCatalogServiceRequestHeader { 01: requesting_user (string) = "phanalytics-staging@PHONEHOME.VMWARE.COM", }, 03: target_table (string) = "bundle_debug_info", 04: db_name (string) = "history_staging", 05: created_partitions (set) = set[1] { "pa__arrival_day=1485129600/pa__collector_id=phanalytics.dev/pa__schema_version=1/", }, } W0123 04:41:49.241971 11246 MetaStoreUtils.java:338] Updating partition stats fast for: bundle_debug_info W0123 04:41:49.245806 11246 MetaStoreUtils.java:341] Updated size to 1805 I0123 04:41:49.289463 11246 CatalogOpExecutor.java:2591] Updating lastDdlTime for table: bundle_debug_info I0123 04:41:49.448922 11246 HdfsTable.java:1038] incremental update for table: history_staging.bundle_debug_info I0123 04:41:49.448999 11246 HdfsTable.java:1103] sync table partitions: bundle_debug_info I0123 04:41:49.470598 11246 MetaStoreUtil.java:129] Fetching 1 partitions for: history_staging.bundle_debug_info using partition batch size: 1000 I0123 04:41:49.488308 11246 HdfsTable.java:348] load block md for bundle_debug_info file a545b57680454f27-535175f900000000_683454875_data.0.parq I0123 04:41:49.492763 11246 FsPermissionChecker.java:290] No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs) Java exception follows: org.apache.hadoop.hdfs.protocol.AclException: The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false. 
at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080) at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3344) at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1983) at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1980) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.getAclStatus(DistributedFileSystem.java:1980) at com.cloudera.impala.util.FsPermissionChecker.getPermissions(FsPermissionChecker.java:288) at com.cloudera.impala.catalog.HdfsTable.getAvailableAccessLevel(HdfsTable.java:760) at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:844) at com.cloudera.impala.catalog.HdfsTable.loadPartitionsFromMetastore(HdfsTable.java:1387) at com.cloudera.impala.catalog.HdfsTable.updatePartitionsFromHms(HdfsTable.java:1155) at com.cloudera.impala.catalog.HdfsTable.load(HdfsTable.java:1044) at com.cloudera.impala.service.CatalogOpExecutor.loadTableMetadata(CatalogOpExecutor.java:477) at com.cloudera.impala.service.CatalogOpExecutor.updateCatalog(CatalogOpExecutor.java:2932) at com.cloudera.impala.service.JniCatalog.updateCatalog(JniCatalog.java:253) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AclException): The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false. 
at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080) at org.apache.hadoop.ipc.Client.call(Client.java:1471) at org.apache.hadoop.ipc.Client.call(Client.java:1408) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230) at com.sun.proxy.$Proxy10.getAclStatus(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAclStatus(ClientNamenodeProtocolTranslatorPB.java:1327) at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104) at com.sun.proxy.$Proxy11.getAclStatus(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3342) ... 13 more I0123 04:41:49.494225 11246 HdfsTable.java:441] Loading disk ids for: history_staging.bundle_debug_info. nodes: 14. 
filesystem: hdfs://ph-hdp-prd-nn01:8020 I0123 04:41:49.504354 11246 HdfsTable.java:1416] loading file metadata for 1 partitions I0123 04:41:49.628404 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:49.628507 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:49.648020 11246 catalog-server.cc:96] UpdateCatalog(): response=TUpdateCatalogResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 45775, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 04: updated_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 45775, 05: table (struct) = TTable { 01: db_name (string) = "history_staging", 02: tbl_name (string) = "bundle_debug_info", 04: id (i32) = 4569, 05: access_level (i32) = 1, 06: columns (list) = list[9] { [0] = TColumn { 01: columnName (string) = "id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 3, }, [1] = TColumn { 01: columnName (string) = "job_name", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 4, }, [2] = TColumn { 01: columnName (string) = "job_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 5, }, [3] = TColumn { 01: columnName (string) = "sequence_file_name", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 6, }, [4] = TColumn { 01: columnName (string) = "pa__is_external", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 7, }, [5] = TColumn { 01: columnName (string) = "pa__collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 
03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 8, }, [6] = TColumn { 01: columnName (string) = "pa__bundle__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 9, }, [7] = TColumn { 01: columnName (string) = "pa__arrival_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 10, }, [8] = TColumn { 01: columnName (string) = "pa__processed_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 11, }, }, 07: clustering_columns (list) = list[3] { [0] = TColumn { 01: columnName (string) = "pa__arrival_day", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 41, 04: num_nulls (i64) = 0, }, 05: position (i32) = 0, }, [1] = TColumn { 01: columnName (string) = "pa__collector_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = 39, 04: num_nulls (i64) = 0, }, 05: position (i32) = 1, }, [2] = TColumn { 01: columnName (string) = "pa__schema_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 1, 04: num_nulls (i64) = 0, }, 05: position (i32) = 2, }, }, 09: table_type (i32) = 0, 10: hdfs_table (struct) = THdfsTable { 01: hdfsBaseDir (string) = "hdfs://ph-hdp-prd-nn01:8020/user/hive/warehouse/history_staging.db/bundle_debug_info", 02: colNames (list) = list[12] { [0] = "pa__arrival_day", [1] = "pa__collector_id", [2] = "pa__schema_version", [3] = "id", [4] = "job_name", [5] = "job_id", [6] = "sequence_file_name", [7] = "pa__is_external", [8] = "pa__collector_instance_id", [9] = "pa__bundle__fk", [10] = "pa__arrival_ts", [11] = "pa__processed_ts", }, 03: nullPartitionKeyValue (string) = "__HIVE_DEFAULT_PARTITION__", 04: partitions (map) = map[698] { -1 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[0] { }, 08: 
blockSize (i32) = 0, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = -1, 15: hms_parameters (map) = map[0] { }, }, 44475 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1471478400, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "phanalytics.1_0", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "374498fc91252a64-77bae7e59dc5faa6_1684725132_data.0.parq", 02: length (i64) = 1699, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1481890665034, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 1699, 03: replica_host_idxs (list) = list[3] { [0] = 0, [1] = 1, [2] = 2, }, 04: disk_ids (list) = list[3] { [0] = 2, [1] = 0, [2] = 2, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=1471478400/pa__collector_id=phanalytics.1_0/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 44475, 15: hms_parameters (map) = map[6] { "COLUMN_STATS_ACCURATE" -> "false", "numFiles" -> "1", "numRows" -> "-1", "rawDataSize" -> "-1", "totalSize" -> "1699", "transient_lastDdlTime" -> "1481890665", }, }, 44476 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1481760000, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) 
= TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "hostclient.1_0", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "9c465ba06aefc562-4f666f0dbd8b09ad_750677658_data.0.parq", 02: length (i64) = 2261, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1481891743786, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 2261, 03: replica_host_idxs (list) = list[3] { [0] = 3, [1] = 4, [2] = 5, }, 04: disk_ids (list) = list[3] { [0] = 1, [1] = 1, [2] = 1, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=1481760000/pa__collector_id=hostclient.1_0/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 44476, 15: hms_parameters (map) = map[6] { "COLUMN_STATS_ACCURATE" -> "false", "numFiles" -> "1", "numRows" -> "-1", "rawDataSize" -> "-1", "totalSize" -> "2261", "transient_lastDdlTime" -> "1481891743", }, }, 44477 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1481760000, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "ngc.2016", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "ad43ed494fcd495c-6a0a5a9430bbddba_1369495711_data.0.parq", 02: length (i64) = 3465, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1481895434855, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 3465, 03: replica_host_idxs (list) = list[3] { [0] = 6, [1] = 2, [2] = 7, }, 04: disk_ids (list) = list[3] { [0] = 0, [1] = 0, [2] = 1, }, 05: 
is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=1481760000/pa__collector_id=ngc.2016/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 44477, 15: hms_parameters (map) = map[6] { "COLUMN_STATS_ACCURATE" -> "false", "numFiles" -> "1", "numRows" -> "-1", "rawDataSize" -> "-1", "totalSize" -> "3465", "transient_lastDdlTime" -> "1481895435", }, }, 44478 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1481760000, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "phanalytics.dev", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "ec4dc75b88b69a21-f6e24072b2cf9caa_179450710_data.0.parq", 02: length (i64) = 2167, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1481891126886, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 2167, 03: replica_host_idxs (list) = list[3] { [0] = 2, [1] = 3, I0123 04:41:49.664599 11246 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.UpdateCatalog from 10.153.201.20:38750 took 482.000ms I0123 04:41:50.009517 11246 rpc-trace.cc:184] RPC call: CatalogService.ExecDdl(from 10.153.201.20:38750) I0123 04:41:50.010062 11246 catalog-server.cc:65] ExecDdl(): request=TDdlExecRequest { 01: protocol_version (i32) = 0, 02: ddl_type (i32) = 10, 11: drop_table_or_view_params (struct) = TDropTableOrViewParams { 01: table_name (struct) = TTableName { 01: db_name (string) = "staging", 02: table_name (string) = "phanal_20170123_044035_bundle_debug_info", }, 02: if_exists (bool) = false, 03: purge (bool) = false, 04: is_table (bool) = true, }, 17: header (struct) = TCatalogServiceRequestHeader { 01: requesting_user (string) = "phanalytics-staging@PHONEHOME.VMWARE.COM", }, } I0123 04:41:50.010879 11246 CatalogOpExecutor.java:1156] Dropping table/view staging.phanal_20170123_044035_bundle_debug_info I0123 04:41:50.442137 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:41:50.442323 11231 rpc-trace.cc:194] RPC call: 
statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:41:50.494742 11226 catalog-server.cc:316] Publishing update: TABLE:history_staging.bundle_debug_info@45775 I0123 04:41:50.509228 11226 catalog-server.cc:316] Publishing update: CATALOG:9ffe97f5f2c34e09:a39e74f98d4721d8@45775 I0123 04:41:50.629530 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:50.629668 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:50.701802 11246 catalog-server.cc:71] ExecDdl(): response=TDdlExecResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 45776, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 05: removed_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 45776, 05: table (struct) = TTable { 01: db_name (string) = "staging", 02: tbl_name (string) = "phanal_20170123_044035_bundle_debug_info", }, }, }, } I0123 04:41:50.702339 11246 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ExecDdl from 10.153.201.20:38750 took 693.000ms I0123 04:41:50.938397 11246 rpc-trace.cc:184] RPC call: CatalogService.ExecDdl(from 10.153.201.20:38750) I0123 04:41:50.938503 11246 catalog-server.cc:65] ExecDdl(): request=TDdlExecRequest { 01: protocol_version (i32) = 0, 02: ddl_type (i32) = 10, 11: drop_table_or_view_params (struct) = TDropTableOrViewParams { 01: table_name (struct) = TTableName { 01: db_name (string) = "staging", 02: table_name (string) = "phanal_20170123_044035_bundle_debug_info", }, 02: if_exists (bool) = true, 03: purge (bool) = false, 04: is_table (bool) = true, }, 17: header (struct) = TCatalogServiceRequestHeader { 01: requesting_user (string) = "phanalytics-staging@PHONEHOME.VMWARE.COM", }, } I0123 04:41:50.939028 11246 CatalogOpExecutor.java:1156] Dropping table/view staging.phanal_20170123_044035_bundle_debug_info I0123 04:41:50.939183 11246 catalog-server.cc:71] ExecDdl(): response=TDdlExecResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 0, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, }, } I0123 04:41:50.939353 11246 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ExecDdl from 10.153.201.20:38750 took 1.000ms I0123 04:41:51.630311 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:51.630452 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:52.443131 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:41:52.444165 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 1.000ms I0123 04:41:52.505623 11226 catalog-server.cc:316] Publishing update: CATALOG:9ffe97f5f2c34e09:a39e74f98d4721d8@45776 I0123 04:41:52.505713 11226 catalog-server.cc:335] Publishing deletion: TABLE:staging.phanal_20170123_044035_bundle_debug_info I0123 04:41:52.631363 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 
10.153.201.11:51416) I0123 04:41:52.631466 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:53.631525 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:53.631680 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:54.446270 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:41:54.448209 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 2.000ms I0123 04:41:54.631640 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:54.631901 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms I0123 04:41:55.633335 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:55.633455 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:56.448927 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:41:56.449084 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:41:56.633862 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:56.633998 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:57.561974 11246 rpc-trace.cc:184] RPC call: CatalogService.ExecDdl(from 10.153.201.20:38750) I0123 04:41:57.562088 11246 catalog-server.cc:65] ExecDdl(): request=TDdlExecRequest { 01: protocol_version (i32) = 0, 02: ddl_type (i32) = 10, 11: drop_table_or_view_params (struct) = TDropTableOrViewParams { 01: table_name (struct) = TTableName { 01: db_name (string) = "staging", 02: table_name (string) = "phanal_20170123_044035_pa__hanging_queries_stats", }, 02: if_exists (bool) = true, 03: purge (bool) = false, 04: is_table (bool) = true, }, 17: header (struct) = TCatalogServiceRequestHeader { 01: requesting_user (string) = "phanalytics-staging@PHONEHOME.VMWARE.COM", }, } I0123 04:41:57.562518 11246 CatalogOpExecutor.java:1156] Dropping table/view staging.phanal_20170123_044035_pa__hanging_queries_stats I0123 04:41:57.562638 11246 catalog-server.cc:71] ExecDdl(): response=TDdlExecResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 0, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, }, } I0123 04:41:57.562789 11246 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ExecDdl from 10.153.201.20:38750 took 1.000ms I0123 04:41:57.599571 11246 rpc-trace.cc:184] RPC call: CatalogService.ExecDdl(from 10.153.201.20:38750) I0123 04:41:57.600136 11246 catalog-server.cc:65] ExecDdl(): request=TDdlExecRequest { 01: protocol_version (i32) = 0, 02: ddl_type (i32) = 3, 06: create_table_params (struct) = TCreateTableParams { 01: table_name (struct) = TTableName { 01: db_name (string) = "staging", 02: table_name (string) = "phanal_20170123_044035_pa__hanging_queries_stats", }, 02: columns (list) = 
list[18] { [0] = TColumn { 01: columnName (string) = "id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [1] = TColumn { 01: columnName (string) = "pa__processed_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [2] = TColumn { 01: columnName (string) = "type", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [3] = TColumn { 01: columnName (string) = "query_duration_millis", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [4] = TColumn { 01: columnName (string) = "pa__arrival_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [5] = TColumn { 01: columnName (string) = "_v", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [6] = TColumn { 01: columnName (string) = "pa__is_external", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [7] = TColumn { 01: columnName (string) = "pa__collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [8] = TColumn { 01: columnName (string) = "user", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [9] = TColumn { 01: columnName (string) = "pa__bundle__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [10] = TColumn { 01: columnName (string) = "query_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [11] = TColumn { 01: columnName (string) = "pa__arrival_day", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: comment (string) = "Inferred from Parquet 
file.", }, [12] = TColumn { 01: columnName (string) = "pa__collector_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [13] = TColumn { 01: columnName (string) = "pa__schema_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [14] = TColumn { 01: columnName (string) = "bundle__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [15] = TColumn { 01: columnName (string) = "collector_id_type", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [16] = TColumn { 01: columnName (string) = "download_ts_bundle", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [17] = TColumn { 01: columnName (string) = "processed_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, }, 04: file_format (i32) = 4, 05: is_external (bool) = true, 06: if_not_exists (bool) = false, 07: owner (string) = "phanalytics-staging@PHONEHOME.VMWARE.COM", 08: row_format (struct) = TTableRowFormat { }, 10: location (string) = "hdfs://ph-hdp-prd-nn01:8020/user/etl/staging/staging__snapshot_staging/parquet/phanalytics.dev/pa__hanging_queries_stats", }, 17: header (struct) = TCatalogServiceRequestHeader { 01: requesting_user (string) = "phanalytics-staging@PHONEHOME.VMWARE.COM", }, } I0123 04:41:57.600617 11246 CatalogOpExecutor.java:1367] Creating table staging.phanal_20170123_044035_pa__hanging_queries_stats I0123 04:41:57.633540 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:57.633621 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:57.645650 11246 catalog-server.cc:71] ExecDdl(): response=TDdlExecResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 45777, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 04: updated_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 45777, 05: table (struct) = TTable { 01: db_name (string) = "staging", 02: tbl_name (string) = "phanal_20170123_044035_pa__hanging_queries_stats", 04: id (i32) = 19560, }, }, }, } I0123 04:41:57.645735 11246 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ExecDdl from 10.153.201.20:38750 took 47.000ms I0123 04:41:57.907786 11246 rpc-trace.cc:184] RPC call: 
CatalogService.PrioritizeLoad(from 10.153.201.20:38750) I0123 04:41:57.908198 11246 catalog-server.cc:127] PrioritizeLoad(): request=TPrioritizeLoadRequest { 01: protocol_version (i32) = 0, 02: header (struct) = TCatalogServiceRequestHeader { }, 03: object_descs (list) = list[1] { [0] = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 0, 05: table (struct) = TTable { 01: db_name (string) = "staging", 02: tbl_name (string) = "phanal_20170123_044035_pa__hanging_queries_stats", }, }, }, } I0123 04:41:57.908403 11246 catalog-server.cc:133] PrioritizeLoad(): response=TPrioritizeLoadResponse { 01: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, } I0123 04:41:57.908552 11246 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.PrioritizeLoad from 10.153.201.20:38750 took 0.000ns I0123 04:41:57.908752 11205 TableLoadingMgr.java:281] Loading next table. Remaining items in queue: 0 I0123 04:41:57.909124 11271 TableLoader.java:59] Loading metadata for: staging.phanal_20170123_044035_pa__hanging_queries_stats I0123 04:41:57.932968 11271 Table.java:161] Loading column stats for table: phanal_20170123_044035_pa__hanging_queries_stats I0123 04:41:57.959925 11271 HdfsTable.java:1030] load table from Hive Metastore: staging.phanal_20170123_044035_pa__hanging_queries_stats I0123 04:41:57.965329 11271 MetaStoreUtil.java:129] Fetching 0 partitions for: staging.phanal_20170123_044035_pa__hanging_queries_stats using partition batch size: 1000 I0123 04:41:57.968235 11271 HdfsTable.java:348] load block md for phanal_20170123_044035_pa__hanging_queries_stats file pa__hanging_queries_stats-r-00000-25216854-8cf7-498b-bfe8-b0d1f6b89b40.parquet I0123 04:41:57.969873 11271 HdfsTable.java:348] load block md for phanal_20170123_044035_pa__hanging_queries_stats file pa__hanging_queries_stats-r-00001-f7dd55ce-fc42-40fe-b2dc-245c59e6189e.parquet I0123 04:41:57.973939 11271 FsPermissionChecker.java:290] No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs) Java exception follows: org.apache.hadoop.hdfs.protocol.AclException: The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false. 
at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080) at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3344) at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1983) at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1980) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.getAclStatus(DistributedFileSystem.java:1980) at com.cloudera.impala.util.FsPermissionChecker.getPermissions(FsPermissionChecker.java:288) at com.cloudera.impala.catalog.HdfsTable.getAvailableAccessLevel(HdfsTable.java:760) at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:844) at com.cloudera.impala.catalog.HdfsTable.loadAllPartitions(HdfsTable.java:716) at com.cloudera.impala.catalog.HdfsTable.load(HdfsTable.java:1035) at com.cloudera.impala.catalog.HdfsTable.load(HdfsTable.java:982) at com.cloudera.impala.catalog.TableLoader.load(TableLoader.java:81) at com.cloudera.impala.catalog.TableLoadingMgr$2.call(TableLoadingMgr.java:232) at com.cloudera.impala.catalog.TableLoadingMgr$2.call(TableLoadingMgr.java:229) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AclException): The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false. 
at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080) at org.apache.hadoop.ipc.Client.call(Client.java:1471) at org.apache.hadoop.ipc.Client.call(Client.java:1408) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230) at com.sun.proxy.$Proxy10.getAclStatus(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAclStatus(ClientNamenodeProtocolTranslatorPB.java:1327) at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104) at com.sun.proxy.$Proxy11.getAclStatus(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3342) ... 17 more I0123 04:41:57.978209 11271 FsPermissionChecker.java:290] No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs) Java exception follows: org.apache.hadoop.hdfs.protocol.AclException: The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false. 
    at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
    at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3344)
    at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1983)
    at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1980)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getAclStatus(DistributedFileSystem.java:1980)
    at com.cloudera.impala.util.FsPermissionChecker.getPermissions(FsPermissionChecker.java:288)
    at com.cloudera.impala.catalog.HdfsTable.getAvailableAccessLevel(HdfsTable.java:760)
    at com.cloudera.impala.catalog.HdfsTable.loadAllPartitions(HdfsTable.java:722)
    at com.cloudera.impala.catalog.HdfsTable.load(HdfsTable.java:1035)
    at com.cloudera.impala.catalog.HdfsTable.load(HdfsTable.java:982)
    at com.cloudera.impala.catalog.TableLoader.load(TableLoader.java:81)
    at com.cloudera.impala.catalog.TableLoadingMgr$2.call(TableLoadingMgr.java:232)
    at com.cloudera.impala.catalog.TableLoadingMgr$2.call(TableLoadingMgr.java:229)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AclException): The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false.
    at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
    at org.apache.hadoop.ipc.Client.call(Client.java:1471)
    at org.apache.hadoop.ipc.Client.call(Client.java:1408)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
    at com.sun.proxy.$Proxy10.getAclStatus(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAclStatus(ClientNamenodeProtocolTranslatorPB.java:1327)
    at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy11.getAclStatus(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3342)
    ... 16 more
I0123 04:41:57.979262 11271 HdfsTable.java:441] Loading disk ids for: staging.phanal_20170123_044035_pa__hanging_queries_stats. nodes: 6.
filesystem: hdfs://ph-hdp-prd-nn01:8020 I0123 04:41:58.449762 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:41:58.449959 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:41:58.485361 11226 catalog-server.cc:316] Publishing update: TABLE:staging.phanal_20170123_044035_pa__hanging_queries_stats@45778 I0123 04:41:58.487620 11226 catalog-server.cc:316] Publishing update: CATALOG:9ffe97f5f2c34e09:a39e74f98d4721d8@45778 I0123 04:41:58.633424 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:58.633615 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:41:59.633507 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:41:59.633615 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:00.449717 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:42:00.450017 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:42:00.634094 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:00.634284 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:01.226275 11246 rpc-trace.cc:184] RPC call: CatalogService.UpdateCatalog(from 10.153.201.20:38750) I0123 04:42:01.226385 11246 catalog-server.cc:90] UpdateCatalog(): request=TUpdateCatalogRequest { 01: protocol_version (i32) = 0, 02: header (struct) = TCatalogServiceRequestHeader { 01: requesting_user (string) = "phanalytics-staging@PHONEHOME.VMWARE.COM", }, 03: target_table (string) = "pa__hanging_queries_stats", 04: db_name (string) = "history_staging", 05: created_partitions (set) = set[1] { "pa__arrival_day=1485129600/pa__collector_id=phanalytics.dev/pa__schema_version=1/", }, } W0123 04:42:01.286430 11246 MetaStoreUtils.java:338] Updating partition stats fast for: pa__hanging_queries_stats W0123 04:42:01.289927 11246 MetaStoreUtils.java:341] Updated size to 2082 I0123 04:42:01.350589 11246 CatalogOpExecutor.java:2591] Updating lastDdlTime for table: pa__hanging_queries_stats I0123 04:42:01.431572 11246 HdfsTable.java:1038] incremental update for table: history_staging.pa__hanging_queries_stats I0123 04:42:01.431674 11246 HdfsTable.java:1103] sync table partitions: pa__hanging_queries_stats I0123 04:42:01.437522 11246 MetaStoreUtil.java:129] Fetching 1 partitions for: history_staging.pa__hanging_queries_stats using partition batch size: 1000 I0123 04:42:01.455099 11246 HdfsTable.java:348] load block md for pa__hanging_queries_stats file 124076b68a9fc9f8-80fd32c500000000_529293034_data.0.parq I0123 04:42:01.459259 11246 FsPermissionChecker.java:290] No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs) Java exception follows: org.apache.hadoop.hdfs.protocol.AclException: The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false. 
    at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
    at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3344)
    at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1983)
    at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1980)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getAclStatus(DistributedFileSystem.java:1980)
    at com.cloudera.impala.util.FsPermissionChecker.getPermissions(FsPermissionChecker.java:288)
    at com.cloudera.impala.catalog.HdfsTable.getAvailableAccessLevel(HdfsTable.java:760)
    at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:844)
    at com.cloudera.impala.catalog.HdfsTable.loadPartitionsFromMetastore(HdfsTable.java:1387)
    at com.cloudera.impala.catalog.HdfsTable.updatePartitionsFromHms(HdfsTable.java:1155)
    at com.cloudera.impala.catalog.HdfsTable.load(HdfsTable.java:1044)
    at com.cloudera.impala.service.CatalogOpExecutor.loadTableMetadata(CatalogOpExecutor.java:477)
    at com.cloudera.impala.service.CatalogOpExecutor.updateCatalog(CatalogOpExecutor.java:2932)
    at com.cloudera.impala.service.JniCatalog.updateCatalog(JniCatalog.java:253)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AclException): The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false.
    at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
    at org.apache.hadoop.ipc.Client.call(Client.java:1471)
    at org.apache.hadoop.ipc.Client.call(Client.java:1408)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
    at com.sun.proxy.$Proxy10.getAclStatus(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAclStatus(ClientNamenodeProtocolTranslatorPB.java:1327)
    at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy11.getAclStatus(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3342)
    ... 13 more
I0123 04:42:01.460782 11246 HdfsTable.java:441] Loading disk ids for: history_staging.pa__hanging_queries_stats. nodes: 14.
filesystem: hdfs://ph-hdp-prd-nn01:8020 I0123 04:42:01.470193 11246 HdfsTable.java:1416] loading file metadata for 1 partitions I0123 04:42:01.481375 11246 catalog-server.cc:96] UpdateCatalog(): response=TUpdateCatalogResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 45779, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 04: updated_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 45779, 05: table (struct) = TTable { 01: db_name (string) = "history_staging", 02: tbl_name (string) = "pa__hanging_queries_stats", 04: id (i32) = 5923, 05: access_level (i32) = 1, 06: columns (list) = list[11] { [0] = TColumn { 01: columnName (string) = "id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 3, }, [1] = TColumn { 01: columnName (string) = "pa__processed_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 4, }, [2] = TColumn { 01: columnName (string) = "type", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 5, }, [3] = TColumn { 01: columnName (string) = "query_duration_millis", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 6, }, [4] = TColumn { 01: columnName (string) = "pa__arrival_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 7, }, [5] = TColumn { 01: columnName (string) = "_v", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 8, }, [6] = TColumn { 01: columnName (string) = "pa__is_external", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: 
scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 9, }, [7] = TColumn { 01: columnName (string) = "pa__collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 10, }, [8] = TColumn { 01: columnName (string) = "user", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 11, }, [9] = TColumn { 01: columnName (string) = "pa__bundle__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 12, }, [10] = TColumn { 01: columnName (string) = "query_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 13, }, }, 07: clustering_columns (list) = list[3] { [0] = TColumn { 01: columnName (string) = "pa__arrival_day", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 36, 04: num_nulls (i64) = 0, }, 05: position (i32) = 0, }, [1] = TColumn { 01: columnName (string) = "pa__collector_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = 1, 04: num_nulls (i64) = 0, }, 05: position (i32) = 1, }, [2] = TColumn { 01: columnName (string) = "pa__schema_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 1, 04: num_nulls (i64) = 0, }, 05: position (i32) = 2, }, }, 09: table_type (i32) = 0, 10: hdfs_table (struct) = THdfsTable { 01: hdfsBaseDir (string) = "hdfs://ph-hdp-prd-nn01:8020/user/hive/warehouse/history_staging.db/pa__hanging_queries_stats", 02: colNames (list) = list[14] { [0] = "pa__arrival_day", [1] = "pa__collector_id", [2] 
= "pa__schema_version", [3] = "id", [4] = "pa__processed_ts", [5] = "type", [6] = "query_duration_millis", [7] = "pa__arrival_ts", [8] = "_v", [9] = "pa__is_external", [10] = "pa__collector_instance_id", [11] = "user", [12] = "pa__bundle__fk", [13] = "query_id", }, 03: nullPartitionKeyValue (string) = "__HIVE_DEFAULT_PARTITION__", 04: partitions (map) = map[37] { -1 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[0] { }, 08: blockSize (i32) = 0, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = -1, 15: hms_parameters (map) = map[0] { }, }, 110091 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1481760000, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "phanalytics.dev", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "134730fadbf3df2f-ae05bc55aa453b91_477675802_data.0.parq", 02: length (i64) = 2565, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1481891143810, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 2565, 03: replica_host_idxs (list) = list[3] { [0] = 0, [1] = 1, [2] = 2, }, 04: disk_ids (list) = list[3] { [0] = 0, [1] = 2, [2] = 0, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=1481760000/pa__collector_id=phanalytics.dev/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 110091, 15: hms_parameters (map) = map[6] { "COLUMN_STATS_ACCURATE" -> "false", "numFiles" -> "1", "numRows" -> "-1", "rawDataSize" -> "-1", "totalSize" -> "2565", "transient_lastDdlTime" -> "1481891143", }, }, 110092 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: 
partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1481846400, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "phanalytics.dev", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[3] { [0] = THdfsFileDesc { 01: file_name (string) = "d0427d660723ed7e-fd6c46c539a3fb9d_1999844942_data.0.parq", 02: length (i64) = 2741, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1481901012854, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 2741, 03: replica_host_idxs (list) = list[3] { [0] = 6, [1] = 7, [2] = 8, }, 04: disk_ids (list) = list[3] { [0] = 1, [1] = 1, [2] = 2, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, [1] = THdfsFileDesc { 01: file_name (string) = "f94812cf8acb970f-63ddb76700000000_2129833325_data.0.parq", 02: length (i64) = 1927, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1482175101409, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 1927, 03: replica_host_idxs (list) = list[3] { [0] = 2, [1] = 9, [2] = 0, }, 04: disk_ids (list) = list[3] { [0] = 1, [1] = 0, [2] = 2, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, [2] = THdfsFileDesc { 01: file_name (string) = "714cdd4836eaa71d-edb33886a2f1238f_1297602376_data.0.parq", 02: length (i64) = 1934, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1481892997094, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 1934, 03: replica_host_idxs (list) = list[3] { [0] = 3, [1] = 4, [2] = 5, }, 04: disk_ids (list) = list[3] { [0] = 0, [1] = 1, [2] = 1, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=1481846400/pa__collector_id=phanalytics.dev/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 110092, 15: hms_parameters (map) = map[6] { "COLUMN_STATS_ACCURATE" -> "false", "numFiles" -> "1", "numRows" -> "-1", "rawDataSize" -> "-1", "totalSize" -> "1934", "transient_lastDdlTime" -> "1481892997", }, }, 110093 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) 
= 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1481932800, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "phanalytics.dev", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "f94812cf8acb970f-63ddb76700000000_519190409_data.0.parq", 02: length (i64) = 2276, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1482175101436, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 2276, 03: replica_host_idxs (list) = list[3] { [0] = 8, [1] = 2, [2] = 10, }, 04: disk_ids (list) = list[3] { [0] = 2, [1] = 2, [2] = 1, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=1481932800/pa__collector_id=phanalytics.dev/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 110093, 15: hms_parameters (map) = map[6] { "COLUMN_STATS_ACCURATE" -> "false", "numFiles" -> "1", "numRows" -> "-1", "rawDataSize" -> "-1", "totalSize" -> "2276", "transient_lastDdlTim I0123 04:42:01.481894 11246 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.UpdateCatalog from 10.153.201.20:38750 took 256.000ms I0123 04:42:01.635288 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:01.635483 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:01.803781 11246 rpc-trace.cc:184] RPC call: CatalogService.ExecDdl(from 10.153.201.20:38750) I0123 04:42:01.803887 11246 catalog-server.cc:65] ExecDdl(): request=TDdlExecRequest { 01: protocol_version (i32) = 0, 02: ddl_type (i32) = 10, 11: drop_table_or_view_params (struct) = TDropTableOrViewParams { 01: table_name (struct) = TTableName { 01: db_name (string) = "staging", 02: table_name (string) = "phanal_20170123_044035_pa__hanging_queries_stats", }, 02: if_exists (bool) = false, 03: purge (bool) = false, 04: is_table (bool) = true, }, 17: header (struct) = TCatalogServiceRequestHeader { 01: requesting_user (string) = "phanalytics-staging@PHONEHOME.VMWARE.COM", }, } I0123 04:42:01.804348 11246 CatalogOpExecutor.java:1156] Dropping table/view staging.phanal_20170123_044035_pa__hanging_queries_stats I0123 
04:42:02.450736 11246 catalog-server.cc:71] ExecDdl(): response=TDdlExecResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 45780, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 05: removed_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 45780, 05: table (struct) = TTable { 01: db_name (string) = "staging", 02: tbl_name (string) = "phanal_20170123_044035_pa__hanging_queries_stats", }, }, }, } I0123 04:42:02.450873 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:42:02.450896 11246 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ExecDdl from 10.153.201.20:38750 took 647.000ms I0123 04:42:02.451035 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:42:02.486515 11226 catalog-server.cc:316] Publishing update: TABLE:history_staging.pa__hanging_queries_stats@45779 I0123 04:42:02.495934 11226 catalog-server.cc:316] Publishing update: CATALOG:9ffe97f5f2c34e09:a39e74f98d4721d8@45780 I0123 04:42:02.495949 11226 catalog-server.cc:335] Publishing deletion: TABLE:staging.phanal_20170123_044035_pa__hanging_queries_stats I0123 04:42:02.635905 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:02.636078 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:02.691342 11246 rpc-trace.cc:184] RPC call: CatalogService.ExecDdl(from 10.153.201.20:38750) I0123 04:42:02.691555 11246 catalog-server.cc:65] ExecDdl(): request=TDdlExecRequest { 01: protocol_version (i32) = 0, 02: ddl_type (i32) = 10, 11: drop_table_or_view_params (struct) = TDropTableOrViewParams { 01: table_name (struct) = TTableName { 01: db_name (string) = "staging", 02: table_name (string) = "phanal_20170123_044035_pa__hanging_queries_stats", }, 02: if_exists (bool) = true, 03: purge (bool) = false, 04: is_table (bool) = true, }, 17: header (struct) = TCatalogServiceRequestHeader { 01: requesting_user (string) = "phanalytics-staging@PHONEHOME.VMWARE.COM", }, } I0123 04:42:02.692184 11246 CatalogOpExecutor.java:1156] Dropping table/view staging.phanal_20170123_044035_pa__hanging_queries_stats I0123 04:42:02.692627 11246 catalog-server.cc:71] ExecDdl(): response=TDdlExecResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 0, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, }, } I0123 04:42:02.692818 11246 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ExecDdl from 10.153.201.20:38750 took 2.000ms I0123 04:42:03.636584 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:03.636693 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:04.451943 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:42:04.452159 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 
04:42:04.636651 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:04.636837 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms I0123 04:42:05.241097 10057 rpc-trace.cc:184] RPC call: CatalogService.ResetMetadata(from 10.153.201.23:57413) I0123 04:42:05.241689 10057 catalog-server.cc:77] ResetMetadata(): request=TResetMetadataRequest { 01: protocol_version (i32) = 0, 02: is_refresh (bool) = true, 03: table_name (struct) = TTableName { 01: db_name (string) = "history_staging", 02: table_name (string) = "bundle", }, 05: partition_spec (list) = list[3] { [0] = TPartitionKeyValue { 01: name (string) = "pa__schema_version", 02: value (string) = "1", }, [1] = TPartitionKeyValue { 01: name (string) = "pa__collector_id", 02: value (string) = "nova.poc", }, [2] = TPartitionKeyValue { 01: name (string) = "pa__arrival_day", 02: value (string) = "1485129600", }, }, } I0123 04:42:05.242408 10057 CatalogServiceCatalog.java:1191] Refreshing Partition metadata: history_staging.bundle pa__arrival_day=1485129600/pa__collector_id=nova.poc/pa__schema_version=1 I0123 04:42:05.301286 10057 HdfsTable.java:348] load block md for bundle file part-00000 I0123 04:42:05.302817 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_1 I0123 04:42:05.304049 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_10 I0123 04:42:05.305542 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_100 I0123 04:42:05.306771 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_101 I0123 04:42:05.308424 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_102 I0123 04:42:05.309880 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_103 I0123 04:42:05.310986 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_104 I0123 04:42:05.312388 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_105 I0123 04:42:05.315141 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_106 I0123 04:42:05.316404 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_107 I0123 04:42:05.318217 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_108 I0123 04:42:05.320984 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_109 I0123 04:42:05.326208 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_11 I0123 04:42:05.328217 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_110 I0123 04:42:05.329401 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_111 I0123 04:42:05.330524 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_112 I0123 04:42:05.331948 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_113 I0123 04:42:05.333025 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_114 I0123 04:42:05.335057 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_115 I0123 04:42:05.336125 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_116 I0123 04:42:05.337604 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_117 I0123 04:42:05.338838 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_118 I0123 04:42:05.339910 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_119 I0123 04:42:05.341817 10057 
HdfsTable.java:348] load block md for bundle file part-00000_copy_12 I0123 04:42:05.343027 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_120 I0123 04:42:05.344179 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_121 I0123 04:42:05.346285 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_122 I0123 04:42:05.348187 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_123 I0123 04:42:05.349262 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_124 I0123 04:42:05.350261 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_125 I0123 04:42:05.351320 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_126 I0123 04:42:05.352555 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_127 I0123 04:42:05.353822 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_128 I0123 04:42:05.354941 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_129 I0123 04:42:05.355895 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_13 I0123 04:42:05.356868 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_130 I0123 04:42:05.358335 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_131 I0123 04:42:05.359323 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_132 I0123 04:42:05.361275 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_133 I0123 04:42:05.362360 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_134 I0123 04:42:05.363356 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_135 I0123 04:42:05.364475 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_136 I0123 04:42:05.365522 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_137 I0123 04:42:05.367636 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_138 I0123 04:42:05.368860 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_139 I0123 04:42:05.370206 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_14 I0123 04:42:05.371275 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_140 I0123 04:42:05.372391 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_141 I0123 04:42:05.373781 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_142 I0123 04:42:05.374968 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_143 I0123 04:42:05.376092 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_144 I0123 04:42:05.377197 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_145 I0123 04:42:05.378165 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_146 I0123 04:42:05.379220 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_147 I0123 04:42:05.380947 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_148 I0123 04:42:05.382035 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_149 I0123 04:42:05.386234 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_15 I0123 04:42:05.387665 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_150 I0123 04:42:05.388866 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_151 I0123 04:42:05.389907 10057 HdfsTable.java:348] load block 
md for bundle file part-00000_copy_152 I0123 04:42:05.390959 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_153 I0123 04:42:05.391973 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_154 I0123 04:42:05.393774 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_155 I0123 04:42:05.394865 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_156 I0123 04:42:05.395926 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_157 I0123 04:42:05.397210 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_158 I0123 04:42:05.398295 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_159 I0123 04:42:05.399895 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_16 I0123 04:42:05.401132 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_160 I0123 04:42:05.402267 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_161 I0123 04:42:05.403455 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_162 I0123 04:42:05.404677 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_163 I0123 04:42:05.405908 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_164 I0123 04:42:05.407742 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_165 I0123 04:42:05.408830 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_166 I0123 04:42:05.409724 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_167 I0123 04:42:05.410706 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_168 I0123 04:42:05.411851 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_169 I0123 04:42:05.412817 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_17 I0123 04:42:05.413697 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_170 I0123 04:42:05.414932 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_171 I0123 04:42:05.416018 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_172 I0123 04:42:05.417060 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_173 I0123 04:42:05.418162 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_174 I0123 04:42:05.419129 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_175 I0123 04:42:05.420804 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_176 I0123 04:42:05.421857 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_177 I0123 04:42:05.422793 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_178 I0123 04:42:05.424026 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_179 I0123 04:42:05.425086 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_18 I0123 04:42:05.430390 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_180 I0123 04:42:05.434439 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_181 I0123 04:42:05.435508 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_182 I0123 04:42:05.438241 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_183 I0123 04:42:05.439182 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_184 I0123 04:42:05.440083 10057 HdfsTable.java:348] load block md for bundle file 
part-00000_copy_185 I0123 04:42:05.442409 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_186 I0123 04:42:05.443549 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_187 I0123 04:42:05.444567 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_188 I0123 04:42:05.445969 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_189 I0123 04:42:05.447026 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_19 I0123 04:42:05.450289 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_190 I0123 04:42:05.451386 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_191 I0123 04:42:05.452837 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_192 I0123 04:42:05.453780 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_193 I0123 04:42:05.454740 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_194 I0123 04:42:05.455801 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_195 I0123 04:42:05.457175 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_196 I0123 04:42:05.458220 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_197 I0123 04:42:05.460276 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_198 I0123 04:42:05.461511 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_199 I0123 04:42:05.462754 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_2 I0123 04:42:05.465241 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_20 I0123 04:42:05.466369 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_200 I0123 04:42:05.467563 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_201 I0123 04:42:05.468683 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_202 I0123 04:42:05.469801 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_203 I0123 04:42:05.470731 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_204 I0123 04:42:05.471817 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_205 I0123 04:42:05.473075 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_206 I0123 04:42:05.474282 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_207 I0123 04:42:05.475252 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_208 I0123 04:42:05.476179 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_209 I0123 04:42:05.477494 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_21 I0123 04:42:05.478739 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_210 I0123 04:42:05.480756 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_211 I0123 04:42:05.481833 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_212 I0123 04:42:05.482848 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_213 I0123 04:42:05.483778 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_214 I0123 04:42:05.484676 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_215 I0123 04:42:05.485563 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_216 I0123 04:42:05.487287 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_217 I0123 
04:42:05.488528 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_218 I0123 04:42:05.489524 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_219 I0123 04:42:05.490471 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_22 I0123 04:42:05.491698 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_220 I0123 04:42:05.492843 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_221 I0123 04:42:05.494243 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_222 I0123 04:42:05.495172 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_223 I0123 04:42:05.496151 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_224 I0123 04:42:05.497120 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_225 I0123 04:42:05.498129 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_226 I0123 04:42:05.499126 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_227 I0123 04:42:05.500794 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_228 I0123 04:42:05.501750 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_229 I0123 04:42:05.502784 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_23 I0123 04:42:05.503954 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_230 I0123 04:42:05.505066 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_231 I0123 04:42:05.506162 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_232 I0123 04:42:05.507330 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_233 I0123 04:42:05.508343 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_234 I0123 04:42:05.509369 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_235 I0123 04:42:05.510669 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_236 I0123 04:42:05.511750 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_237 I0123 04:42:05.512657 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_238 I0123 04:42:05.514492 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_239 I0123 04:42:05.515581 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_24 I0123 04:42:05.516476 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_240 I0123 04:42:05.517338 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_241 I0123 04:42:05.518421 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_242 I0123 04:42:05.519309 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_243 I0123 04:42:05.521292 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_244 I0123 04:42:05.522364 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_245 I0123 04:42:05.523331 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_246 I0123 04:42:05.524590 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_247 I0123 04:42:05.525646 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_248 I0123 04:42:05.526643 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_249 I0123 04:42:05.527632 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_25 I0123 04:42:05.528832 10057 
HdfsTable.java:348] load block md for bundle file part-00000_copy_250 I0123 04:42:05.529837 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_251 I0123 04:42:05.530973 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_252 I0123 04:42:05.531941 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_253 I0123 04:42:05.533789 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_254 I0123 04:42:05.538287 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_255 I0123 04:42:05.539198 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_256 I0123 04:42:05.542347 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_257 I0123 04:42:05.544786 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_258 I0123 04:42:05.545964 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_259 I0123 04:42:05.547710 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_26 I0123 04:42:05.548940 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_260 I0123 04:42:05.550057 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_261 I0123 04:42:05.551002 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_262 I0123 04:42:05.551883 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_263 I0123 04:42:05.552780 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_264 I0123 04:42:05.553676 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_265 I0123 04:42:05.558275 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_266 I0123 04:42:05.559460 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_267 I0123 04:42:05.560494 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_268 I0123 04:42:05.561483 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_269 I0123 04:42:05.562583 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_27 I0123 04:42:05.566308 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_270 I0123 04:42:05.570286 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_271 I0123 04:42:05.573971 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_272 I0123 04:42:05.578287 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_273 I0123 04:42:05.579849 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_274 I0123 04:42:05.582312 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_275 I0123 04:42:05.583281 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_276 I0123 04:42:05.584365 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_277 I0123 04:42:05.585450 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_278 I0123 04:42:05.586362 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_279 I0123 04:42:05.588551 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_28 I0123 04:42:05.589541 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_280 I0123 04:42:05.590477 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_281 I0123 04:42:05.592315 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_282 I0123 04:42:05.593308 10057 HdfsTable.java:348] load 
block md for bundle file part-00000_copy_283 I0123 04:42:05.595140 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_284 I0123 04:42:05.596122 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_285 I0123 04:42:05.597535 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_286 I0123 04:42:05.598480 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_287 I0123 04:42:05.599689 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_288 I0123 04:42:05.601655 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_289 I0123 04:42:05.602730 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_29 I0123 04:42:05.603756 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_290 I0123 04:42:05.604717 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_291 I0123 04:42:05.605615 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_292 I0123 04:42:05.606636 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_293 I0123 04:42:05.608216 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_294 I0123 04:42:05.609275 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_295 I0123 04:42:05.610707 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_296 I0123 04:42:05.611793 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_297 I0123 04:42:05.612820 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_298 I0123 04:42:05.613791 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_299 I0123 04:42:05.614815 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_3 I0123 04:42:05.616286 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_30 I0123 04:42:05.617363 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_300 I0123 04:42:05.618410 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_301 I0123 04:42:05.619314 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_302 I0123 04:42:05.621381 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_303 I0123 04:42:05.622303 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_304 I0123 04:42:05.623397 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_305 I0123 04:42:05.624372 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_306 I0123 04:42:05.625427 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_307 I0123 04:42:05.627962 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_308 I0123 04:42:05.629034 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_309 I0123 04:42:05.630172 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_31 I0123 04:42:05.631304 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_310 I0123 04:42:05.632311 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_311 I0123 04:42:05.633971 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_312 I0123 04:42:05.635174 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_313 I0123 04:42:05.636256 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_314 I0123 04:42:05.636752 11232 rpc-trace.cc:184] RPC call: 
StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:05.636885 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:05.637274 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_315 I0123 04:42:05.638218 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_316 I0123 04:42:05.639261 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_317 I0123 04:42:05.641147 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_318 I0123 04:42:05.642369 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_319 I0123 04:42:05.643538 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_32 I0123 04:42:05.644647 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_320 I0123 04:42:05.645668 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_321 I0123 04:42:05.647723 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_322 I0123 04:42:05.648798 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_323 I0123 04:42:05.649802 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_324 I0123 04:42:05.650810 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_325 I0123 04:42:05.651921 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_326 I0123 04:42:05.654708 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_327 I0123 04:42:05.655720 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_328 I0123 04:42:05.656746 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_329 I0123 04:42:05.657877 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_33 I0123 04:42:05.658829 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_330 I0123 04:42:05.660925 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_331 I0123 04:42:05.661983 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_332 I0123 04:42:05.663336 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_333 I0123 04:42:05.664329 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_334 I0123 04:42:05.665447 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_335 I0123 04:42:05.667639 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_336 I0123 04:42:05.668630 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_337 I0123 04:42:05.669680 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_338 I0123 04:42:05.670698 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_339 I0123 04:42:05.671762 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_34 I0123 04:42:05.673032 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_340 I0123 04:42:05.674268 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_341 I0123 04:42:05.675314 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_342 I0123 04:42:05.676801 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_343 I0123 04:42:05.678035 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_344 I0123 04:42:05.679131 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_345 I0123 04:42:05.680943 
10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_346 I0123 04:42:05.682665 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_347 I0123 04:42:05.683764 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_348 I0123 04:42:05.685236 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_349 I0123 04:42:05.686328 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_35 I0123 04:42:05.687724 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_350 I0123 04:42:05.689013 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_351 I0123 04:42:05.690245 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_352 I0123 04:42:05.692541 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_353 I0123 04:42:05.693816 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_354 I0123 04:42:05.695236 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_355 I0123 04:42:05.697244 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_356 I0123 04:42:05.701916 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_357 I0123 04:42:05.703399 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_358 I0123 04:42:05.704723 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_359 I0123 04:42:05.706279 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_36 I0123 04:42:05.709975 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_360 I0123 04:42:05.711073 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_361 I0123 04:42:05.712388 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_362 I0123 04:42:05.713850 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_363 I0123 04:42:05.716301 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_364 I0123 04:42:05.717679 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_365 I0123 04:42:05.718902 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_366 I0123 04:42:05.720608 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_367 I0123 04:42:05.721580 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_368 I0123 04:42:05.722829 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_369 I0123 04:42:05.724064 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_37 I0123 04:42:05.725554 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_370 I0123 04:42:05.726560 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_371 I0123 04:42:05.727562 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_372 I0123 04:42:05.728646 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_373 I0123 04:42:05.729677 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_374 I0123 04:42:05.730744 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_375 I0123 04:42:05.731910 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_376 I0123 04:42:05.733310 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_377 I0123 04:42:05.734553 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_378 I0123 04:42:05.735833 10057 HdfsTable.java:348] 
load block md for bundle file part-00000_copy_379 I0123 04:42:05.737128 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_38 I0123 04:42:05.738389 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_380 I0123 04:42:05.740176 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_39 I0123 04:42:05.741636 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_4 I0123 04:42:05.742686 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_40 I0123 04:42:05.743839 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_41 I0123 04:42:05.744828 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_42 I0123 04:42:05.746508 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_43 I0123 04:42:05.747798 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_44 I0123 04:42:05.749037 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_45 I0123 04:42:05.750358 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_46 I0123 04:42:05.751520 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_47 I0123 04:42:05.752923 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_48 I0123 04:42:05.754066 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_49 I0123 04:42:05.755071 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_5 I0123 04:42:05.756399 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_50 I0123 04:42:05.757472 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_51 I0123 04:42:05.758486 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_52 I0123 04:42:05.760277 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_53 I0123 04:42:05.761374 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_54 I0123 04:42:05.762291 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_55 I0123 04:42:05.763329 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_56 I0123 04:42:05.764282 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_57 I0123 04:42:05.765529 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_58 I0123 04:42:05.766566 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_59 I0123 04:42:05.767608 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_6 I0123 04:42:05.768682 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_60 I0123 04:42:05.769592 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_61 I0123 04:42:05.770584 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_62 I0123 04:42:05.771652 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_63 I0123 04:42:05.773133 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_64 I0123 04:42:05.774184 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_65 I0123 04:42:05.775233 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_66 I0123 04:42:05.776335 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_67 I0123 04:42:05.777305 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_68 I0123 04:42:05.779925 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_69 I0123 
04:42:05.781252 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_7 I0123 04:42:05.782322 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_70 I0123 04:42:05.783283 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_71 I0123 04:42:05.784266 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_72 I0123 04:42:05.786280 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_73 I0123 04:42:05.787261 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_74 I0123 04:42:05.788339 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_75 I0123 04:42:05.789355 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_76 I0123 04:42:05.790292 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_77 I0123 04:42:05.792735 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_78 I0123 04:42:05.793745 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_79 I0123 04:42:05.794800 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_8 I0123 04:42:05.795882 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_80 I0123 04:42:05.797118 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_81 I0123 04:42:05.799522 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_82 I0123 04:42:05.800688 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_83 I0123 04:42:05.801717 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_84 I0123 04:42:05.802731 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_85 I0123 04:42:05.803750 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_86 I0123 04:42:05.805928 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_87 I0123 04:42:05.806962 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_88 I0123 04:42:05.807981 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_89 I0123 04:42:05.808962 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_9 I0123 04:42:05.810019 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_90 I0123 04:42:05.811071 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_91 I0123 04:42:05.812731 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_92 I0123 04:42:05.813660 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_93 I0123 04:42:05.814743 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_94 I0123 04:42:05.815696 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_95 I0123 04:42:05.816731 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_96 I0123 04:42:05.822299 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_97 I0123 04:42:05.826409 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_98 I0123 04:42:05.830368 10057 HdfsTable.java:348] load block md for bundle file part-00000_copy_99 I0123 04:42:05.838945 10057 FsPermissionChecker.java:290] No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs) Java exception follows: org.apache.hadoop.hdfs.protocol.AclException: The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false. 
    at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
    at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3344)
    at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1983)
    at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1980)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getAclStatus(DistributedFileSystem.java:1980)
    at com.cloudera.impala.util.FsPermissionChecker.getPermissions(FsPermissionChecker.java:288)
    at com.cloudera.impala.catalog.HdfsTable.getAvailableAccessLevel(HdfsTable.java:760)
    at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:844)
    at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:792)
    at com.cloudera.impala.catalog.HdfsTable.reloadPartition(HdfsTable.java:1951)
    at com.cloudera.impala.catalog.CatalogServiceCatalog.reloadPartition(CatalogServiceCatalog.java:1210)
    at com.cloudera.impala.service.CatalogOpExecutor.execResetMetadata(CatalogOpExecutor.java:2707)
    at com.cloudera.impala.service.JniCatalog.resetMetadata(JniCatalog.java:151)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AclException): The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false.
    at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
    at org.apache.hadoop.ipc.Client.call(Client.java:1471)
    at org.apache.hadoop.ipc.Client.call(Client.java:1408)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
    at com.sun.proxy.$Proxy10.getAclStatus(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAclStatus(ClientNamenodeProtocolTranslatorPB.java:1327)
    at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy11.getAclStatus(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3342)
    ... 12 more
I0123 04:42:05.840953 10057 HdfsTable.java:441] Loading disk ids for: history_staging.bundle. nodes: 14.
filesystem: hdfs://ph-hdp-prd-nn01:8020 I0123 04:42:06.154088 10057 catalog-server.cc:83] ResetMetadata(): response=TResetMetadataResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 45781, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 04: updated_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 45781, 05: table (struct) = TTable { 01: db_name (string) = "history_staging", 02: tbl_name (string) = "bundle", 04: id (i32) = 4125, 05: access_level (i32) = 1, 06: columns (list) = list[18] { [0] = TColumn { 01: columnName (string) = "id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 3, }, [1] = TColumn { 01: columnName (string) = "internal_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 4, }, [2] = TColumn { 01: columnName (string) = "size_in_bytes", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 5, }, [3] = TColumn { 01: columnName (string) = "ext", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 6, }, [4] = TColumn { 01: columnName (string) = "pa__detected_proxy_sources", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 7, }, [5] = TColumn { 01: columnName (string) = "pa__proxy_source", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 8, }, [6] = TColumn { 01: columnName (string) = "pa__os_language", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = 
TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 9, }, [7] = TColumn { 01: columnName (string) = "collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 10, }, [8] = TColumn { 01: columnName (string) = "collection__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 11, }, [9] = TColumn { 01: columnName (string) = "pa__is_external", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 12, }, [10] = TColumn { 01: columnName (string) = "pa__collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 13, }, [11] = TColumn { 01: columnName (string) = "pa__bundle__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 14, }, [12] = TColumn { 01: columnName (string) = "pa__arrival_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 15, }, [13] = TColumn { 01: columnName (string) = "pa__processed_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 16, }, [14] = TColumn { 01: columnName (string) = "envelope_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = -1, 04: num_nulls 
(i64) = -1, }, 05: position (i32) = 17, }, [15] = TColumn { 01: columnName (string) = "pa__kafka_partition_offset", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 18, }, [16] = TColumn { 01: columnName (string) = "pa__kafka_partition", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 19, }, [17] = TColumn { 01: columnName (string) = "pa__client_ip_path", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 20, }, }, 07: clustering_columns (list) = list[3] { [0] = TColumn { 01: columnName (string) = "pa__arrival_day", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 77, 04: num_nulls (i64) = 0, }, 05: position (i32) = 0, }, [1] = TColumn { 01: columnName (string) = "pa__collector_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = 90, 04: num_nulls (i64) = 0, }, 05: position (i32) = 1, }, [2] = TColumn { 01: columnName (string) = "pa__schema_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 1, 04: num_nulls (i64) = 0, }, 05: position (i32) = 2, }, }, 09: table_type (i32) = 0, 10: hdfs_table (struct) = THdfsTable { 01: hdfsBaseDir (string) = "hdfs://ph-hdp-prd-nn01:8020/user/hive/warehouse/history_staging.db/bundle", 02: colNames (list) = list[21] { [0] = "pa__arrival_day", [1] = "pa__collector_id", [2] = "pa__schema_version", [3] = "id", [4] = "internal_id", [5] = "size_in_bytes", [6] = "ext", [7] = "pa__detected_proxy_sources", [8] = "pa__proxy_source", [9] = "pa__os_language", [10] = "collector_instance_id", [11] = "collection__fk", [12] = "pa__is_external", [13] = "pa__collector_instance_id", [14] = "pa__bundle__fk", [15] = "pa__arrival_ts", [16] = "pa__processed_ts", [17] = "envelope_ts", [18] = "pa__kafka_partition_offset", [19] = "pa__kafka_partition", [20] = "pa__client_ip_path", }, 03: nullPartitionKeyValue (string) = "__HIVE_DEFAULT_PARTITION__", 04: partitions (map) = map[824] { -1 -> THdfsPartition { 01: 
lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[0] { }, 08: blockSize (i32) = 0, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = -1, 15: hms_parameters (map) = map[0] { }, }, 4461 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1462406400, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "kafka-output", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "b434d4229412fbf-8d7ef47f0000000d_592464453_data.0.parq", 02: length (i64) = 3029, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1482419824254, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 3029, 03: replica_host_idxs (list) = list[3] { [0] = 0, [1] = 1, [2] = 2, }, 04: disk_ids (list) = list[3] { [0] = 2, [1] = 2, [2] = 1, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=1462406400/pa__collector_id=kafka-output/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 4461, 15: hms_parameters (map) = map[6] { "COLUMN_STATS_ACCURATE" -> "false", "numFiles" -> "1", "numRows" -> "-1", "rawDataSize" -> "-1", "totalSize" -> "3029", "transient_lastDdlTime" -> "1484725727", }, }, 4462 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1462838400, }, }, }, }, 
[1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "vsm.1_0", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "b434d4229412fbf-8d7ef47f00000002_1792732270_data.0.parq", 02: length (i64) = 3227, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1482419824277, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 3227, 03: replica_host_idxs (list) = list[3] { [0] = 2, [1] = 3, [2] = 0, }, 04: disk_ids (list) = list[3] { [0] = 1, [1] = 1, [2] = 0, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=1462838400/pa__collector_id=vsm.1_0/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 4462, 15: hms_parameters (map) = map[6] { "COLUMN_STATS_ACCURATE" -> "false", "numFiles" -> "1", "numRows" -> "-1", "rawDataSize" -> "-1", "totalSize" -> "3227", "transient_lastDdlTime" -> "1484725727", }, }, 4463 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1463875200, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { I0123 04:42:06.170078 10057 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ResetMetadata from 10.153.201.23:57413 took 929.000ms I0123 04:42:06.453125 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:42:06.453382 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:42:06.526707 11226 catalog-server.cc:316] Publishing update: TABLE:history_staging.bundle@45781 I0123 04:42:06.543802 11226 catalog-server.cc:316] Publishing update: CATALOG:9ffe97f5f2c34e09:a39e74f98d4721d8@45781 I0123 04:42:06.636714 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 
10.153.201.11:51416) I0123 04:42:06.636859 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:07.636857 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:07.636998 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:08.454103 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:42:08.455126 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 1.000ms I0123 04:42:08.638212 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:08.638316 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:09.639139 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:09.639328 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:10.458529 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:42:10.461405 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 3.000ms I0123 04:42:10.639473 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:10.639709 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:11.640259 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:11.640424 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:12.241533 31200 webserver.cc:417] Rendering page /jsonmetrics took 1327.24K clock cycles I0123 04:42:12.462071 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:42:12.462287 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:42:12.641266 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:12.641361 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:13.641626 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:13.641778 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms I0123 04:42:14.463006 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:42:14.463227 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:42:14.641875 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:14.641984 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:15.642997 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:15.643084 11232 
rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:16.463804 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:42:16.464012 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:42:16.643682 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:16.643892 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms I0123 04:42:17.644160 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:17.644290 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:18.464778 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:42:18.465102 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:42:18.645117 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:18.645251 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:19.502141 11222 CatalogServiceCatalog.java:200] Reloading cache pool names from HDFS I0123 04:42:19.645650 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:19.645786 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms I0123 04:42:20.465853 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:42:20.466092 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:42:20.645725 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:20.645908 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:21.646697 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:21.646837 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms I0123 04:42:21.668195 11071 authentication.cc:497] Registering impala/ph-hdp-prd-nn02@PHONEHOME.VMWARE.COM, keytab file /var/run/cloudera-scm-agent/process/12327-impala-CATALOGSERVER/impala.keytab I0123 04:42:22.534198 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:42:22.534929 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 1.000ms I0123 04:42:22.647274 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:22.647480 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:23.648044 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:23.648172 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 
04:42:24.535555 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:42:24.535712 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:42:24.649149 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:24.649309 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:25.649598 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:25.649802 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms I0123 04:42:26.536547 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:42:26.536772 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 1.000ms I0123 04:42:26.649616 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:26.649737 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms I0123 04:42:27.649664 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:27.649830 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms I0123 04:42:28.537431 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:42:28.537588 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:42:28.649716 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:28.649916 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:29.650070 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:29.650193 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:30.538381 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:42:30.538610 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:42:30.650656 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:30.650799 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms I0123 04:42:31.651453 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:31.651654 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:32.539293 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:42:32.539592 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:42:32.651724 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:32.651980 11232 rpc-trace.cc:194] 
RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:33.653054 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:33.653146 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:34.540185 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:42:34.540426 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:42:34.654008 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:34.654103 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:35.654994 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:35.655155 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:35.909744 16373 rpc-trace.cc:184] RPC call: CatalogService.ExecDdl(from 10.153.201.16:60379) I0123 04:42:35.910470 16373 catalog-server.cc:65] ExecDdl(): request=TDdlExecRequest { 01: protocol_version (i32) = 0, 02: ddl_type (i32) = 3, 06: create_table_params (struct) = TCreateTableParams { 01: table_name (struct) = TTableName { 01: db_name (string) = "default", 02: table_name (string) = "test_table", }, 02: columns (list) = list[1] { [0] = TColumn { 01: columnName (string) = "id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, }, }, 04: file_format (i32) = 0, 05: is_external (bool) = false, 06: if_not_exists (bool) = false, 07: owner (string) = "phanalytics-test@PHONEHOME.VMWARE.COM", 08: row_format (struct) = TTableRowFormat { }, }, 17: header (struct) = TCatalogServiceRequestHeader { 01: requesting_user (string) = "phanalytics-test@PHONEHOME.VMWARE.COM", }, } I0123 04:42:35.911664 16373 CatalogOpExecutor.java:1367] Creating table default.test_table I0123 04:42:35.960863 16373 catalog-server.cc:71] ExecDdl(): response=TDdlExecResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 45782, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 04: updated_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 45782, 05: table (struct) = TTable { 01: db_name (string) = "default", 02: tbl_name (string) = "test_table", 04: id (i32) = 19562, }, }, }, } I0123 04:42:35.961019 16373 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ExecDdl from 10.153.201.16:60379 took 51.000ms I0123 04:42:36.541013 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:42:36.541162 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:42:36.633293 11226 catalog-server.cc:316] Publishing update: TABLE:default.test_table@45782 I0123 04:42:36.655890 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:36.656020 11232 rpc-trace.cc:194] 
RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:36.663892 11226 catalog-server.cc:316] Publishing update: CATALOG:9ffe97f5f2c34e09:a39e74f98d4721d8@45782 I0123 04:42:37.656422 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:37.656635 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:38.541729 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:42:38.541942 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:42:38.656654 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:38.656783 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms I0123 04:42:39.657256 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:39.657351 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:40.542636 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:42:40.542815 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 1.000ms I0123 04:42:40.657665 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:40.657778 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms I0123 04:42:41.658087 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:41.658207 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:42.544271 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:42:42.544327 11231 catalog-server.cc:232] Catalog Version: 45782 Last Catalog Version: 45782 I0123 04:42:42.544421 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:42:42.658938 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:42.659031 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:43.659807 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:43.659893 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:44.545102 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:42:44.545305 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:42:44.659950 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:44.660042 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:45.661128 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 
10.153.201.11:51416) I0123 04:42:45.661218 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 04:42:46.545851 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 04:42:46.546035 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 04:42:46.661837 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 04:42:46.661957 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
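
Note on the AclException logged above: the NameNode for this cluster runs with dfs.namenode.acls.enabled set to false, so every getAclStatus() call is rejected and the catalog server falls back to the plain owner/group/other permission bits (the "No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs)" message from FsPermissionChecker.java:290). The following is a minimal, hypothetical Java sketch of that fallback pattern, not Impala's FsPermissionChecker code: it catches the unwrapped AclException from FileSystem.getAclStatus() and relies on FileStatus permissions instead. The class name and the path argument are placeholders.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.permission.AclStatus;
    import org.apache.hadoop.hdfs.protocol.AclException;

    public class AclFallbackSketch {
      public static void main(String[] args) throws Exception {
        // Connect using the cluster configuration on the classpath (core-site.xml / hdfs-site.xml).
        FileSystem fs = FileSystem.get(new Configuration());
        Path dir = new Path(args[0]);  // placeholder: e.g. a partition directory under the table's hdfsBaseDir

        AclStatus acls = null;
        try {
          // Rejected with AclException when the NameNode runs with
          // dfs.namenode.acls.enabled=false, as in the stack trace above.
          acls = fs.getAclStatus(dir);
        } catch (AclException e) {
          // Same fallback the log shows: skip ACLs and rely on the basic
          // permission bits, which HDFS still enforces.
          System.out.println("No ACLs retrieved, skipping ACLs check: " + e.getMessage());
        }

        FileStatus status = fs.getFileStatus(dir);
        System.out.println("owner=" + status.getOwner()
            + " group=" + status.getGroup()
            + " perms=" + status.getPermission()
            + (acls == null ? "" : " aclEntries=" + acls.getEntries().size()));
      }
    }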