I0123 01:09:00.824617 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:00.925530 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:09:00.925745 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 1.000ms
I0123 01:09:01.825007 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:01.825176 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:02.826275 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:02.826412 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:02.926458 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:09:02.926578 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns
I0123 01:09:03.827126 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:03.827302 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:04.828047 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:04.828187 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:04.927287 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:09:04.927455 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns
I0123 01:09:05.828898 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:05.829074 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:06.829983 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:06.830145 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:06.928088 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:09:06.928215 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns
I0123 01:09:07.830745 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:07.830929 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:08.830965 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:08.831077 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:08.928997 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:09:08.929131 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns
I0123 01:09:09.831748 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:09.831965 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:10.831665 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:10.831768 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms
I0123 01:09:10.930014 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:09:10.930142 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns
I0123 01:09:11.831583 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:11.831691 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:11.993921 18960 webserver.cc:417] Rendering page /jsonmetrics took 1557.85K clock cycles
I0123 01:09:12.831396 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:12.831543 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:12.930675 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:09:12.930829 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 1.000ms
I0123 01:09:13.832317 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:13.832450 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:14.832437 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:14.832536 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:14.931586 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:09:14.931730 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 1.000ms
I0123 01:09:15.833088 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:15.833202 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:16.833552 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:16.833658 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:16.932453 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:09:16.932591 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns
I0123 01:09:17.833915 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:17.834019 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:18.834924 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:18.835016 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:18.933284 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:09:18.933476 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns
I0123 01:09:19.502224 11222 CatalogServiceCatalog.java:200] Reloading cache pool names from HDFS
I0123 01:09:19.836000 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:19.836189 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:20.837355 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:20.837458 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:20.934279 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:09:20.934415 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns
I0123 01:09:21.536444 16373 rpc-trace.cc:184] RPC call: CatalogService.ResetMetadata(from 10.153.201.16:60379)
I0123 01:09:21.536617 16373 catalog-server.cc:77] ResetMetadata(): request=TResetMetadataRequest { 01: protocol_version (i32) = 0, 02: is_refresh (bool) = false, 03: table_name (struct) = TTableName { 01: db_name (string) = "history", 02: table_name (string) = "sa_issue_rating", }, }
I0123 01:09:21.537744 16373 CatalogServiceCatalog.java:946] Invalidating table metadata: history.sa_issue_rating
I0123 01:09:21.558411 16373 catalog-server.cc:83] ResetMetadata(): response=TResetMetadataResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 43397, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 04: updated_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 43397, 05: table (struct) = TTable { 01: db_name (string) = "history", 02: tbl_name (string) = "sa_issue_rating", 04: id (i32) = 18763, }, }, }, }
I0123 01:09:21.558535 16373 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ResetMetadata from 10.153.201.16:60379 took 22.000ms
I0123 01:09:21.838306 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:21.838425 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:22.839215 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:22.839303 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:22.934955 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:09:22.935070 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns
I0123 01:09:23.016211 11226 catalog-server.cc:316] Publishing update: TABLE:history.sa_issue_rating@43397
I0123 01:09:23.029263 11226 catalog-server.cc:316] Publishing update: CATALOG:9ffe97f5f2c34e09:a39e74f98d4721d8@43397
I0123 01:09:23.839794 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:23.840029 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:24.841284 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:24.841403 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:24.935655 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:09:24.935783 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 1.000ms
I0123 01:09:25.842483 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:25.842700 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:26.842998 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:26.843132 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:26.935721 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:09:26.935838 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns
I0123 01:09:27.844359 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:27.844475 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:28.844480 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:28.844586 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:28.936532 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:09:28.936691 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns
I0123 01:09:29.538645 11233 authentication.cc:300] Sasl general option auto_transition requested
I0123 01:09:29.538691 11233 authentication.cc:300] Sasl general option mech_list requested
I0123 01:09:29.555449 11233 authentication.cc:300] Sasl general option canon_user_plugin requested
I0123 01:09:29.555467 11233 authentication.cc:323] Attempting to authenticate user "impala/ph-hdp-prd-dn01@PHONEHOME.VMWARE.COM"
I0123 01:09:29.555500 11233 authentication.cc:425] Successfully authenticated principal "impala/ph-hdp-prd-dn01@PHONEHOME.VMWARE.COM" on an internal connection
I0123 01:09:29.555938 11233 thread.cc:289] Started thread 18966 - thrift-server:CatalogService-26
I0123 01:09:29.557226 18966 rpc-trace.cc:184] RPC call: CatalogService.ResetMetadata(from 10.153.201.15:48708)
I0123 01:09:29.557312 18966 catalog-server.cc:77] ResetMetadata(): request=TResetMetadataRequest { 01: protocol_version (i32) = 0, 02: is_refresh (bool) = true, 03: table_name (struct) = TTableName { 01: db_name (string) = "history", 02: table_name (string) = "bundle", }, }
I0123 01:09:29.558061 18966 CatalogServiceCatalog.java:836] Refreshing table metadata: history.bundle
I0123 01:09:29.583124 18966 Table.java:161] Loading column stats for table: bundle
I0123 01:09:29.626976 18966 Column.java:69] col stats: collection__fk #distinct=555701
I0123 01:09:29.627068 18966 Column.java:69] col stats: envelope_ts #distinct=5258683
I0123 01:09:29.627122 18966 Column.java:69] col stats: pa__arrival_ts #distinct=17700588
I0123 01:09:29.627182 18966 Column.java:69] col stats: pa__kafka_partition_offset #distinct=8213797
I0123 01:09:29.627234 18966 Column.java:69] col stats: pa__os_language #distinct=14
I0123 01:09:29.627291 18966 Column.java:69] col stats: pa__processed_ts #distinct=19070410
I0123 01:09:29.627348 18966 Column.java:69] col stats: pa__detected_proxy_sources #distinct=8
I0123 01:09:29.627401 18966 Column.java:69] col stats: pa__client_ip_path #distinct=111645
I0123 01:09:29.627460 18966 Column.java:69] col stats: pa__bundle__fk #distinct=23103018
I0123 01:09:29.627526 18966 Column.java:69] col stats: collector_instance_id #distinct=925201
I0123 01:09:29.627638 18966 Column.java:69] col stats: pa__kafka_partition #distinct=1
I0123 01:09:29.627751 18966 Column.java:69] col stats: pa__is_external #distinct=2
I0123 01:09:29.627832 18966 Column.java:69] col stats: pa__proxy_source #distinct=4
I0123 01:09:29.627923 18966 Column.java:69] col stats: size_in_bytes #distinct=322605
I0123 01:09:29.627987 18966 Column.java:69] col stats: id #distinct=23103018
I0123 01:09:29.628046 18966 Column.java:69] col stats: pa__collector_instance_id #distinct=1050297
I0123 01:09:29.628098 18966 Column.java:69] col stats: ext #distinct=2
I0123 01:09:29.628175 18966 Column.java:69] col stats: internal_id #distinct=10202831
I0123 01:09:29.628233 18966 HdfsTable.java:1038] incremental update for table: history.bundle
I0123 01:09:29.628296 18966 HdfsTable.java:1103] sync table partitions: bundle
I0123 01:09:29.844916 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:29.845054 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:30.195516 18966 HdfsTable.java:1416] loading file metadata for 18137 partitions
I0123 01:09:30.846204 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:30.846331 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:30.937296 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:09:30.937455 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns
I0123 01:09:31.847173 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:31.847347 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:32.847925 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:32.848031 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:32.938005 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:09:32.938127 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns
I0123 01:09:32.989106 18966 HdfsTable.java:348] load block md for bundle file part-00000
I0123 01:09:32.990497 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_1
I0123 01:09:32.994843 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_10
I0123 01:09:32.997303 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_11
I0123 01:09:32.999042 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_12
I0123 01:09:33.000326 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_13
I0123 01:09:33.002928 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_14
I0123 01:09:33.004544 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_15
I0123 01:09:33.005928 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_16
I0123 01:09:33.010572 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_17
I0123 01:09:33.012055 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_18
I0123 01:09:33.017048 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_19
I0123 01:09:33.021435 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_2
I0123 01:09:33.024137 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_20
I0123 01:09:33.025303 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_21
I0123 01:09:33.029639 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_22
I0123 01:09:33.033298 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_23
I0123 01:09:33.037317 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_24
I0123 01:09:33.039255 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_25
I0123 01:09:33.040370 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_26
I0123 01:09:33.042846 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_27
I0123 01:09:33.046885 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_28
I0123 01:09:33.050973 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_29
I0123 01:09:33.054858 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_3
I0123 01:09:33.058563 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_30
I0123 01:09:33.063150 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_31
I0123 01:09:33.065493 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_32
I0123 01:09:33.070885 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_33
I0123 01:09:33.074947 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_34
I0123 01:09:33.076393 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_35
I0123 01:09:33.078384 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_36
I0123 01:09:33.083093 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_37
I0123 01:09:33.086962 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_38
I0123 01:09:33.088588 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_39
I0123 01:09:33.089823 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_4
I0123 01:09:33.092224 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_40
I0123 01:09:33.093473 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_41
I0123 01:09:33.094741 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_42
I0123 01:09:33.095978 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_43
I0123 01:09:33.097245 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_44
I0123 01:09:33.098873 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_45
I0123 01:09:33.099999 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_46
I0123 01:09:33.101238 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_47
I0123 01:09:33.102519 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_48
I0123 01:09:33.103703 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_49
I0123 01:09:33.104753 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_5
I0123 01:09:33.105849 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_50
I0123 01:09:33.106989 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_6
I0123 01:09:33.108253 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_7
I0123 01:09:33.109493 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_8
I0123 01:09:33.112275 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_9
I0123 01:09:33.849153 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:33.849433 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:34.544598 18966 HdfsTable.java:348] load block md for bundle file part-00000
I0123 01:09:34.546169 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_1
I0123 01:09:34.547580 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_10
I0123 01:09:34.549027 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_100
I0123 01:09:34.550341 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_101
I0123 01:09:34.552088 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_102
I0123 01:09:34.553970 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_103
I0123 01:09:34.555408 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_104
I0123 01:09:34.556915 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_105
I0123 01:09:34.558558 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_106
I0123 01:09:34.559821 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_107
I0123 01:09:34.561077 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_108
I0123 01:09:34.562463 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_109
I0123 01:09:34.563681 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_11
I0123 01:09:34.565026 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_110
I0123 01:09:34.566190 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_111
I0123 01:09:34.567562 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_112
I0123 01:09:34.568861 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_113
I0123 01:09:34.570149 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_114
I0123 01:09:34.571519 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_115
I0123 01:09:34.572688 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_116
I0123 01:09:34.574038 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_117
I0123 01:09:34.575359 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_118
I0123 01:09:34.577052 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_119
I0123 01:09:34.578630 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_12
I0123 01:09:34.579776 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_120
I0123 01:09:34.581161 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_121
I0123 01:09:34.582346 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_122
I0123 01:09:34.583583 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_123
I0123 01:09:34.584878 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_124
I0123 01:09:34.586073 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_125
I0123 01:09:34.587389 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_126
I0123 01:09:34.588677 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_127
I0123 01:09:34.590783 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_128
I0123 01:09:34.592046 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_129
I0123 01:09:34.593430 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_13
I0123 01:09:34.594676 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_130
I0123 01:09:34.597826 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_131
I0123 01:09:34.599105 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_132
I0123 01:09:34.600484 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_133
I0123 01:09:34.601598 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_134
I0123 01:09:34.602715 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_135
I0123 01:09:34.604279 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_136
I0123 01:09:34.605480 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_137
I0123 01:09:34.606657 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_138
I0123 01:09:34.607952 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_139
I0123 01:09:34.609266 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_14
I0123 01:09:34.611071 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_140
I0123 01:09:34.612324 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_141
I0123 01:09:34.613546 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_142
I0123 01:09:34.614816 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_143
I0123 01:09:34.616013 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_15
I0123 01:09:34.620098 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_16
I0123 01:09:34.621557 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_17
I0123 01:09:34.623026 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_18
I0123 01:09:34.624380 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_19
I0123 01:09:34.625706 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_2
I0123 01:09:34.626943 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_20
I0123 01:09:34.628156 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_21
I0123 01:09:34.629328 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_22
I0123 01:09:34.630828 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_23
I0123 01:09:34.631937 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_24
I0123 01:09:34.633198 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_25
I0123 01:09:34.634207 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_26
I0123 01:09:34.635408 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_27
I0123 01:09:34.636874 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_28
I0123 01:09:34.638067 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_29
I0123 01:09:34.639220 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_3
I0123 01:09:34.640386 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_30
I0123 01:09:34.641461 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_31
I0123 01:09:34.642639 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_32
I0123 01:09:34.644671 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_33
I0123 01:09:34.645725 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_34
I0123 01:09:34.647110 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_35
I0123 01:09:34.648201 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_36
I0123 01:09:34.649396 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_37
I0123 01:09:34.651255 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_38
I0123 01:09:34.652420 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_39
I0123 01:09:34.653733 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_4
I0123 01:09:34.654999 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_40
I0123 01:09:34.656229 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_41
I0123 01:09:34.657596 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_42
I0123 01:09:34.658782 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_43
I0123 01:09:34.660112 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_44
I0123 01:09:34.661567 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_45
I0123 01:09:34.662775 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_46
I0123 01:09:34.664721 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_47
I0123 01:09:34.665896 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_48
I0123 01:09:34.667069 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_49
I0123 01:09:34.668311 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_5
I0123 01:09:34.669648 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_50
I0123 01:09:34.671066 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_51
I0123 01:09:34.672430 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_52
I0123 01:09:34.673622 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_53
I0123 01:09:34.674785 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_54
I0123 01:09:34.676113 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_55
I0123 01:09:34.677592 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_56
I0123 01:09:34.679172 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_57
I0123 01:09:34.680500 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_58
I0123 01:09:34.681696 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_59
I0123 01:09:34.684639 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_6
I0123 01:09:34.685717 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_60
I0123 01:09:34.687038 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_61
I0123 01:09:34.688130 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_62
I0123 01:09:34.689260 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_63
I0123 01:09:34.691671 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_64
I0123 01:09:34.692834 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_65
I0123 01:09:34.693929 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_66
I0123 01:09:34.695042 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_67
I0123 01:09:34.696116 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_68
I0123 01:09:34.698593 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_69
I0123 01:09:34.699609 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_7
I0123 01:09:34.700727 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_70
I0123 01:09:34.701856 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_71
I0123 01:09:34.702937 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_72
I0123 01:09:34.705063 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_73
I0123 01:09:34.706168 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_74
I0123 01:09:34.707320 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_75
I0123 01:09:34.708600 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_76
I0123 01:09:34.709746 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_77
I0123 01:09:34.712218 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_78
I0123 01:09:34.713556 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_79
I0123 01:09:34.714685 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_8
I0123 01:09:34.715837 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_80
I0123 01:09:34.719033 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_81
I0123 01:09:34.720224 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_82
I0123 01:09:34.721264 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_83
I0123 01:09:34.722410 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_84
I0123 01:09:34.723594 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_85
I0123 01:09:34.724958 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_86
I0123 01:09:34.726104 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_87
I0123 01:09:34.727308 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_88
I0123 01:09:34.728339 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_89
I0123 01:09:34.729360 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_9
I0123 01:09:34.730489 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_90
I0123 01:09:34.732018 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_91
I0123 01:09:34.733057 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_92
I0123 01:09:34.734272 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_93
I0123 01:09:34.735436 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_94
I0123 01:09:34.736470 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_95
I0123 01:09:34.738742 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_96
I0123 01:09:34.739763 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_97
I0123 01:09:34.740886 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_98
I0123 01:09:34.742205 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_99
I0123 01:09:34.778591 18966 HdfsTable.java:348] load block md for bundle file part-00000
I0123 01:09:34.779819 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_1
I0123 01:09:34.781039 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_10
I0123 01:09:34.782199 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_100
I0123 01:09:34.784413 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_101
I0123 01:09:34.785497 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_102
I0123 01:09:34.786921 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_103
I0123 01:09:34.788096 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_104
I0123 01:09:34.789243 18966 HdfsTable.java:348] load block md for bundle file 
part-00000_copy_105 I0123 01:09:34.791045 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_106 I0123 01:09:34.792279 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_107 I0123 01:09:34.793424 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_108 I0123 01:09:34.794899 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_109 I0123 01:09:34.796113 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_11 I0123 01:09:34.797415 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_110 I0123 01:09:34.798423 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_111 I0123 01:09:34.799553 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_112 I0123 01:09:34.800722 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_113 I0123 01:09:34.801898 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_114 I0123 01:09:34.803967 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_115 I0123 01:09:34.805156 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_116 I0123 01:09:34.806499 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_117 I0123 01:09:34.807576 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_118 I0123 01:09:34.808843 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_119 I0123 01:09:34.810372 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_12 I0123 01:09:34.811535 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_120 I0123 01:09:34.812887 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_121 I0123 01:09:34.814029 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_122 I0123 01:09:34.815160 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_123 I0123 01:09:34.817088 
18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_124 I0123 01:09:34.818292 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_125 I0123 01:09:34.819422 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_126 I0123 01:09:34.820493 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_127 I0123 01:09:34.822906 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_128 I0123 01:09:34.824234 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_129 I0123 01:09:34.825476 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_13 I0123 01:09:34.826599 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_130 I0123 01:09:34.827561 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_131 I0123 01:09:34.829645 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_132 I0123 01:09:34.830926 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_133 I0123 01:09:34.832275 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_134 I0123 01:09:34.833426 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_135 I0123 01:09:34.834516 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_136 I0123 01:09:34.836457 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_137 I0123 01:09:34.837786 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_138 I0123 01:09:34.839083 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_139 I0123 01:09:34.840286 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_14 I0123 01:09:34.841447 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_140 I0123 01:09:34.843068 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_141 I0123 01:09:34.844158 18966 HdfsTable.java:348] load block md 
for bundle file part-00000_copy_15 I0123 01:09:34.845172 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_16 I0123 01:09:34.846242 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_17 I0123 01:09:34.847354 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_18 I0123 01:09:34.849512 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_19 I0123 01:09:34.850190 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:09:34.850282 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:09:34.850611 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_2 I0123 01:09:34.852051 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_20 I0123 01:09:34.853266 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_21 I0123 01:09:34.854363 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_22 I0123 01:09:34.856087 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_23 I0123 01:09:34.857329 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_24 I0123 01:09:34.858594 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_25 I0123 01:09:34.859817 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_26 I0123 01:09:34.860942 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_27 I0123 01:09:34.862798 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_28 I0123 01:09:34.864245 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_29 I0123 01:09:34.865416 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_3 I0123 01:09:34.866484 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_30 I0123 01:09:34.867516 18966 HdfsTable.java:348] load 
block md for bundle file part-00000_copy_31 I0123 01:09:34.869046 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_32 I0123 01:09:34.870249 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_33 I0123 01:09:34.871616 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_34 I0123 01:09:34.873088 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_35 I0123 01:09:34.874644 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_36 I0123 01:09:34.876132 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_37 I0123 01:09:34.877815 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_38 I0123 01:09:34.879092 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_39 I0123 01:09:34.880154 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_4 I0123 01:09:34.882551 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_40 I0123 01:09:34.883741 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_41 I0123 01:09:34.885030 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_42 I0123 01:09:34.886345 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_43 I0123 01:09:34.889142 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_44 I0123 01:09:34.890411 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_45 I0123 01:09:34.891573 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_46 I0123 01:09:34.892807 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_47 I0123 01:09:34.893966 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_48 I0123 01:09:34.895973 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_49 I0123 01:09:34.897167 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_5 I0123 
01:09:34.898342 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_50 I0123 01:09:34.899528 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_51 I0123 01:09:34.900683 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_52 I0123 01:09:34.902200 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_53 I0123 01:09:34.903421 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_54 I0123 01:09:34.904634 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_55 I0123 01:09:34.905956 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_56 I0123 01:09:34.907266 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_57 I0123 01:09:34.908995 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_58 I0123 01:09:34.910182 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_59 I0123 01:09:34.911391 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_6 I0123 01:09:34.912477 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_60 I0123 01:09:34.913697 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_61 I0123 01:09:34.915029 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_62 I0123 01:09:34.916249 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_63 I0123 01:09:34.917577 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_64 I0123 01:09:34.919067 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_65 I0123 01:09:34.920264 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_66 I0123 01:09:34.921380 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_67 I0123 01:09:34.923171 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_68 I0123 01:09:34.924288 18966 HdfsTable.java:348] load block md for 
bundle file part-00000_copy_69 I0123 01:09:34.925930 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_7 I0123 01:09:34.927063 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_70 I0123 01:09:34.928138 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_71 I0123 01:09:34.930145 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_72 I0123 01:09:34.931288 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_73 I0123 01:09:34.932344 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_74 I0123 01:09:34.933334 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_75 I0123 01:09:34.934504 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_76 I0123 01:09:34.936029 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_77 I0123 01:09:34.937099 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_78 I0123 01:09:34.938369 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_79 I0123 01:09:34.938741 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 01:09:34.938843 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 01:09:34.939527 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_8 I0123 01:09:34.940811 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_80 I0123 01:09:34.942571 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_81 I0123 01:09:34.943720 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_82 I0123 01:09:34.944914 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_83 I0123 01:09:34.946125 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_84 I0123 01:09:34.947413 18966 HdfsTable.java:348] load 
block md for bundle file part-00000_copy_85 I0123 01:09:34.948930 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_86 I0123 01:09:34.950227 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_87 I0123 01:09:34.951344 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_88 I0123 01:09:34.952525 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_89 I0123 01:09:34.953606 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_9 I0123 01:09:34.954887 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_90 I0123 01:09:34.956071 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_91 I0123 01:09:34.957324 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_92 I0123 01:09:34.958443 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_93 I0123 01:09:34.959480 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_94 I0123 01:09:34.960503 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_95 I0123 01:09:34.962157 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_96 I0123 01:09:34.963205 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_97 I0123 01:09:34.964257 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_98 I0123 01:09:34.965327 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_99 I0123 01:09:34.976421 18966 HdfsTable.java:348] load block md for bundle file part-00000 I0123 01:09:34.977993 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_1 I0123 01:09:34.981328 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_10 I0123 01:09:34.982597 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_100 I0123 01:09:34.983963 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_101 I0123 01:09:34.985152 
18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_102 I0123 01:09:34.986366 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_103 I0123 01:09:34.988262 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_104 I0123 01:09:34.989529 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_105 I0123 01:09:34.990835 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_106 I0123 01:09:34.992044 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_107 I0123 01:09:34.994647 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_108 I0123 01:09:34.995909 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_109 I0123 01:09:34.997009 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_11 I0123 01:09:34.998251 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_110 I0123 01:09:34.999447 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_111 I0123 01:09:35.001420 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_112 I0123 01:09:35.002878 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_113 I0123 01:09:35.004153 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_114 I0123 01:09:35.005515 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_115 I0123 01:09:35.007972 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_116 I0123 01:09:35.009341 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_117 I0123 01:09:35.011044 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_118 I0123 01:09:35.012676 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_119 I0123 01:09:35.014111 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_12 I0123 01:09:35.015316 18966 HdfsTable.java:348] load block md 
for bundle file part-00000_copy_120 I0123 01:09:35.016436 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_121 I0123 01:09:35.017658 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_122 I0123 01:09:35.020042 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_123 I0123 01:09:35.021592 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_124 I0123 01:09:35.022936 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_125 I0123 01:09:35.024039 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_126 I0123 01:09:35.025116 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_127 I0123 01:09:35.026442 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_128 I0123 01:09:35.028093 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_129 I0123 01:09:35.029574 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_13 I0123 01:09:35.030772 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_130 I0123 01:09:35.032161 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_131 I0123 01:09:35.033915 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_132 I0123 01:09:35.035034 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_133 I0123 01:09:35.036068 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_134 I0123 01:09:35.037350 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_135 I0123 01:09:35.038795 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_136 I0123 01:09:35.040961 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_137 I0123 01:09:35.042171 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_138 I0123 01:09:35.043404 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_139 
I0123 01:09:35.044461 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_14 I0123 01:09:35.045611 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_140 I0123 01:09:35.047528 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_141 I0123 01:09:35.049293 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_142 I0123 01:09:35.050568 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_143 I0123 01:09:35.051631 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_15 I0123 01:09:35.053153 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_16 I0123 01:09:35.054381 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_17 I0123 01:09:35.055492 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_18 I0123 01:09:35.056676 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_19 I0123 01:09:35.057847 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_2 I0123 01:09:35.059049 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_20 I0123 01:09:35.061154 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_21 I0123 01:09:35.062235 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_22 I0123 01:09:35.063256 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_23 I0123 01:09:35.064487 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_24 I0123 01:09:35.065582 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_25 I0123 01:09:35.067823 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_26 I0123 01:09:35.069032 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_27 I0123 01:09:35.070137 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_28 I0123 01:09:35.071208 18966 HdfsTable.java:348] load 
block md for bundle file part-00000_copy_29 I0123 01:09:35.072314 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_3 I0123 01:09:35.074331 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_30 I0123 01:09:35.075486 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_31 I0123 01:09:35.076663 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_32 I0123 01:09:35.077802 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_33 I0123 01:09:35.078990 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_34 I0123 01:09:35.081048 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_35 I0123 01:09:35.082106 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_36 I0123 01:09:35.083214 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_37 I0123 01:09:35.084266 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_38 I0123 01:09:35.085328 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_39 I0123 01:09:35.086488 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_4 I0123 01:09:35.088062 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_40 I0123 01:09:35.089246 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_41 I0123 01:09:35.090569 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_42 I0123 01:09:35.091771 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_43 I0123 01:09:35.093009 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_44 I0123 01:09:35.094084 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_45 I0123 01:09:35.095371 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_46 I0123 01:09:35.096467 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_47 I0123 
01:09:35.097590 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_48 I0123 01:09:35.099097 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_49 I0123 01:09:35.101594 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_5 I0123 01:09:35.102874 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_50 I0123 01:09:35.104074 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_51 I0123 01:09:35.105260 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_52 I0123 01:09:35.106679 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_53 I0123 01:09:35.108530 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_54 I0123 01:09:35.109688 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_55 I0123 01:09:35.111076 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_56 I0123 01:09:35.112345 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_57 I0123 01:09:35.113504 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_58 I0123 01:09:35.116231 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_59 I0123 01:09:35.118396 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_6 I0123 01:09:35.119508 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_60 I0123 01:09:35.120525 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_61 I0123 01:09:35.123029 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_62 I0123 01:09:35.124127 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_63 I0123 01:09:35.125270 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_64 I0123 01:09:35.126332 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_65 I0123 01:09:35.127473 18966 HdfsTable.java:348] load block md for 
bundle file part-00000_copy_66 I0123 01:09:35.129046 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_67 I0123 01:09:35.130277 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_68 I0123 01:09:35.131444 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_69 I0123 01:09:35.132781 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_7 I0123 01:09:35.133981 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_70 I0123 01:09:35.136003 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_71 I0123 01:09:35.137393 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_72 I0123 01:09:35.138463 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_73 I0123 01:09:35.140002 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_74 I0123 01:09:35.142137 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_75 I0123 01:09:35.143326 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_76 I0123 01:09:35.144608 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_77 I0123 01:09:35.145825 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_78 I0123 01:09:35.147084 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_79 I0123 01:09:35.148187 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_8 I0123 01:09:35.149283 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_80 I0123 01:09:35.150310 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_81 I0123 01:09:35.151322 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_82 I0123 01:09:35.152340 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_83 I0123 01:09:35.153432 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_84 I0123 01:09:35.155711 18966 
HdfsTable.java:348] load block md for bundle file part-00000_copy_85 I0123 01:09:35.157058 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_86 I0123 01:09:35.158324 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_87 I0123 01:09:35.159448 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_88 I0123 01:09:35.160457 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_89 I0123 01:09:35.162139 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_9 I0123 01:09:35.163444 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_90 I0123 01:09:35.164573 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_91 I0123 01:09:35.165784 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_92 I0123 01:09:35.168048 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_93 I0123 01:09:35.170091 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_94 I0123 01:09:35.171205 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_95 I0123 01:09:35.172268 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_96 I0123 01:09:35.173455 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_97 I0123 01:09:35.174535 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_98 I0123 01:09:35.177078 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_99 I0123 01:09:35.190771 18966 HdfsTable.java:348] load block md for bundle file part-00000 I0123 01:09:35.192018 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_1 I0123 01:09:35.193240 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_10 I0123 01:09:35.194259 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_100 I0123 01:09:35.196238 18966 HdfsTable.java:348] load block md for bundle file 
part-00000_copy_101 I0123 01:09:35.197702 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_102 I0123 01:09:35.198982 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_103 I0123 01:09:35.200006 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_104 I0123 01:09:35.201141 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_105 I0123 01:09:35.202807 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_106 I0123 01:09:35.204092 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_107 I0123 01:09:35.205298 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_108 I0123 01:09:35.206431 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_109 I0123 01:09:35.207527 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_11 I0123 01:09:35.209028 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_110 I0123 01:09:35.210404 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_111 I0123 01:09:35.212250 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_112 I0123 01:09:35.213517 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_113 I0123 01:09:35.215378 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_114 I0123 01:09:35.216583 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_115 I0123 01:09:35.217700 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_116 I0123 01:09:35.218806 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_117 I0123 01:09:35.219923 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_118 I0123 01:09:35.221122 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_119 I0123 01:09:35.222311 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_12 I0123 01:09:35.223433 
18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_120 I0123 01:09:35.224503 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_121 I0123 01:09:35.225637 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_122 I0123 01:09:35.227013 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_123 I0123 01:09:35.228401 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_124 I0123 01:09:35.229522 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_125 I0123 01:09:35.231081 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_126 I0123 01:09:35.232161 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_127 I0123 01:09:35.233167 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_128 I0123 01:09:35.234834 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_129 I0123 01:09:35.236183 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_13 I0123 01:09:35.237180 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_130 I0123 01:09:35.238481 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_131 I0123 01:09:35.239629 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_132 I0123 01:09:35.240964 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_133 I0123 01:09:35.242141 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_134 I0123 01:09:35.243243 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_135 I0123 01:09:35.244369 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_136 I0123 01:09:35.245527 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_137 I0123 01:09:35.247629 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_138 I0123 01:09:35.248836 18966 HdfsTable.java:348] load block md 
for bundle file part-00000_copy_139 I0123 01:09:35.250066 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_14 I0123 01:09:35.251093 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_140 I0123 01:09:35.252115 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_141 I0123 01:09:35.254088 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_142 I0123 01:09:35.255297 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_143 I0123 01:09:35.256358 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_15 I0123 01:09:35.257406 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_16 I0123 01:09:35.258791 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_17 I0123 01:09:35.260098 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_18 I0123 01:09:35.261148 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_19 I0123 01:09:35.262220 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_2 I0123 01:09:35.263454 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_20 I0123 01:09:35.264581 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_21 I0123 01:09:35.266403 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_22 I0123 01:09:35.267572 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_23 I0123 01:09:35.269009 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_24 I0123 01:09:35.270228 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_25 I0123 01:09:35.271419 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_26 I0123 01:09:35.273372 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_27 I0123 01:09:35.274494 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_28 I0123 
01:09:35.275640 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_29 I0123 01:09:35.276850 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_3 I0123 01:09:35.278018 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_30 I0123 01:09:35.279130 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_31 I0123 01:09:35.280334 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_32 I0123 01:09:35.281612 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_33 I0123 01:09:35.282629 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_34 I0123 01:09:35.283972 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_35 I0123 01:09:35.286432 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_36 I0123 01:09:35.287576 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_37 I0123 01:09:35.288918 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_38 I0123 01:09:35.290047 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_39 I0123 01:09:35.291159 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_4 I0123 01:09:35.293026 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_40 I0123 01:09:35.294127 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_41 I0123 01:09:35.295258 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_42 I0123 01:09:35.296388 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_43 I0123 01:09:35.300053 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_44 I0123 01:09:35.301174 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_45 I0123 01:09:35.302176 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_46 I0123 01:09:35.303146 18966 HdfsTable.java:348] load block md for 
bundle file part-00000_copy_47 I0123 01:09:35.304177 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_48 I0123 01:09:35.306000 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_49 I0123 01:09:35.307117 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_5 I0123 01:09:35.308169 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_50 I0123 01:09:35.309198 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_51 I0123 01:09:35.310329 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_52 I0123 01:09:35.312378 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_53 I0123 01:09:35.313694 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_54 I0123 01:09:35.314786 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_55 I0123 01:09:35.316054 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_56 I0123 01:09:35.317194 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_57 I0123 01:09:35.319048 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_58 I0123 01:09:35.320240 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_59 I0123 01:09:35.321292 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_6 I0123 01:09:35.322386 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_60 I0123 01:09:35.323421 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_61 I0123 01:09:35.325824 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_62 I0123 01:09:35.326977 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_63 I0123 01:09:35.328065 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_64 I0123 01:09:35.329419 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_65 I0123 01:09:35.330395 18966 
HdfsTable.java:348] load block md for bundle file part-00000_copy_66 I0123 01:09:35.332105 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_67 I0123 01:09:35.333155 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_68 I0123 01:09:35.334285 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_69 I0123 01:09:35.335546 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_7 I0123 01:09:35.336625 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_70 I0123 01:09:35.337862 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_71 I0123 01:09:35.339038 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_72 I0123 01:09:35.340171 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_73 I0123 01:09:35.341296 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_74 I0123 01:09:35.342453 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_75 I0123 01:09:35.343387 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_76 I0123 01:09:35.344527 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_77 I0123 01:09:35.345785 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_78 I0123 01:09:35.346966 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_79 I0123 01:09:35.348505 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_8 I0123 01:09:35.349834 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_80 I0123 01:09:35.351315 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_81 I0123 01:09:35.352452 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_82 I0123 01:09:35.353560 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_83 I0123 01:09:35.354842 18966 HdfsTable.java:348] load block md for bundle file 
part-00000_copy_84 I0123 01:09:35.356111 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_85 I0123 01:09:35.358506 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_86 I0123 01:09:35.359807 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_87 I0123 01:09:35.361042 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_88 I0123 01:09:35.362181 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_89 I0123 01:09:35.363453 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_9 I0123 01:09:35.364946 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_90 I0123 01:09:35.366091 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_91 I0123 01:09:35.367307 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_92 I0123 01:09:35.368762 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_93 I0123 01:09:35.369946 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_94 I0123 01:09:35.371135 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_95 I0123 01:09:35.372301 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_96 I0123 01:09:35.373571 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_97 I0123 01:09:35.374702 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_98 I0123 01:09:35.375852 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_99 I0123 01:09:35.851277 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:09:35.851385 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:09:36.851932 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:09:36.852044 11232 rpc-trace.cc:194] RPC call: 
statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:09:36.939404 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 01:09:36.939532 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 01:09:37.853538 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:09:37.853777 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms I0123 01:09:37.914446 18966 HdfsTable.java:348] load block md for bundle file part-00000 I0123 01:09:37.915859 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_1 I0123 01:09:37.916836 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_2 I0123 01:09:37.920997 18966 HdfsTable.java:348] load block md for bundle file part-00000 I0123 01:09:37.922485 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_1 I0123 01:09:37.923661 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_2 I0123 01:09:37.925185 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_3 I0123 01:09:37.926230 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_4 I0123 01:09:37.930030 18966 HdfsTable.java:348] load block md for bundle file part-00000 I0123 01:09:37.932974 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_1 I0123 01:09:37.936964 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_2 I0123 01:09:37.943230 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_3 I0123 01:09:37.945500 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_4 I0123 01:09:37.956681 18966 HdfsTable.java:348] load block md for bundle file part-00000 I0123 01:09:37.957844 18966 HdfsTable.java:348] load block md for 
bundle file part-00000_copy_1 I0123 01:09:37.959237 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_10 I0123 01:09:37.961513 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_100 I0123 01:09:37.962523 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_101 I0123 01:09:37.963855 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_102 I0123 01:09:37.964974 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_103 I0123 01:09:37.966166 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_104 I0123 01:09:37.968181 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_105 I0123 01:09:37.969339 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_106 I0123 01:09:37.970502 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_107 I0123 01:09:37.971745 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_108 I0123 01:09:37.972966 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_109 I0123 01:09:37.974485 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_11 I0123 01:09:37.975674 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_110 I0123 01:09:37.976842 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_111 I0123 01:09:37.977939 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_112 I0123 01:09:37.979257 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_113 I0123 01:09:37.981179 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_114 I0123 01:09:37.982270 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_115 I0123 01:09:37.983386 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_116 I0123 01:09:37.984611 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_117 I0123 
01:09:37.985664 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_118 I0123 01:09:37.986894 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_119 I0123 01:09:37.989343 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_12 I0123 01:09:37.990399 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_120 I0123 01:09:37.991678 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_121 I0123 01:09:37.993062 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_122 I0123 01:09:37.995023 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_123 I0123 01:09:37.996168 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_124 I0123 01:09:37.997375 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_125 I0123 01:09:37.998391 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_126 I0123 01:09:37.999632 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_127 I0123 01:09:38.000784 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_128 I0123 01:09:38.002472 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_129 I0123 01:09:38.003512 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_13 I0123 01:09:38.004724 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_130 I0123 01:09:38.006162 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_131 I0123 01:09:38.007320 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_132 I0123 01:09:38.008523 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_133 I0123 01:09:38.009606 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_134 I0123 01:09:38.010680 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_135 I0123 01:09:38.011755 18966 HdfsTable.java:348] 
load block md for bundle file part-00000_copy_136 I0123 01:09:38.012955 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_137 I0123 01:09:38.014259 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_138 I0123 01:09:38.015868 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_139 I0123 01:09:38.017207 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_14 I0123 01:09:38.018493 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_140 I0123 01:09:38.019696 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_15 I0123 01:09:38.020691 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_16 I0123 01:09:38.022555 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_17 I0123 01:09:38.023730 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_18 I0123 01:09:38.024978 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_19 I0123 01:09:38.026160 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_2 I0123 01:09:38.027253 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_20 I0123 01:09:38.028475 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_21 I0123 01:09:38.029474 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_22 I0123 01:09:38.030603 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_23 I0123 01:09:38.032068 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_24 I0123 01:09:38.033393 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_25 I0123 01:09:38.034823 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_26 I0123 01:09:38.036077 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_27 I0123 01:09:38.037302 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_28 I0123 
01:09:38.038358 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_29 I0123 01:09:38.039494 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_3 I0123 01:09:38.040650 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_30 I0123 01:09:38.042867 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_31 I0123 01:09:38.044005 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_32 I0123 01:09:38.045459 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_33 I0123 01:09:38.046582 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_34 I0123 01:09:38.047730 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_35 I0123 01:09:38.049139 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_36 I0123 01:09:38.050303 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_37 I0123 01:09:38.051424 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_38 I0123 01:09:38.052564 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_39 I0123 01:09:38.053812 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_4 I0123 01:09:38.055225 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_40 I0123 01:09:38.058816 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_41 I0123 01:09:38.060060 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_42 I0123 01:09:38.066970 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_43 I0123 01:09:38.069761 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_44 I0123 01:09:38.071316 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_45 I0123 01:09:38.074331 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_46 I0123 01:09:38.076299 18966 HdfsTable.java:348] load block md for 
bundle file part-00000_copy_47 I0123 01:09:38.078770 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_48 I0123 01:09:38.081816 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_49 I0123 01:09:38.082852 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_5 I0123 01:09:38.083986 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_50 I0123 01:09:38.085194 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_51 I0123 01:09:38.086314 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_52 I0123 01:09:38.088663 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_53 I0123 01:09:38.090553 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_54 I0123 01:09:38.091635 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_55 I0123 01:09:38.092939 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_56 I0123 01:09:38.095075 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_57 I0123 01:09:38.096326 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_58 I0123 01:09:38.097443 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_59 I0123 01:09:38.098902 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_6 I0123 01:09:38.100210 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_60 I0123 01:09:38.101908 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_61 I0123 01:09:38.103070 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_62 I0123 01:09:38.104219 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_63 I0123 01:09:38.105689 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_64 I0123 01:09:38.106871 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_65 I0123 01:09:38.108885 18966 
HdfsTable.java:348] load block md for bundle file part-00000_copy_66 I0123 01:09:38.110095 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_67 I0123 01:09:38.111269 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_68 I0123 01:09:38.112583 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_69 I0123 01:09:38.116153 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_7 I0123 01:09:38.117655 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_70 I0123 01:09:38.118818 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_71 I0123 01:09:38.120194 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_72 I0123 01:09:38.121690 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_73 I0123 01:09:38.123008 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_74 I0123 01:09:38.124107 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_75 I0123 01:09:38.125373 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_76 I0123 01:09:38.126500 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_77 I0123 01:09:38.129096 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_78 I0123 01:09:38.130218 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_79 I0123 01:09:38.131304 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_8 I0123 01:09:38.133213 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_80 I0123 01:09:38.134421 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_81 I0123 01:09:38.135921 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_82 I0123 01:09:38.137138 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_83 I0123 01:09:38.138278 18966 HdfsTable.java:348] load block md for bundle file 
part-00000_copy_84 I0123 01:09:38.139427 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_85 I0123 01:09:38.140427 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_86 I0123 01:09:38.142839 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_87 I0123 01:09:38.144091 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_88 I0123 01:09:38.145196 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_89 I0123 01:09:38.146335 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_9 I0123 01:09:38.147380 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_90 I0123 01:09:38.149353 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_91 I0123 01:09:38.150751 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_92 I0123 01:09:38.152073 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_93 I0123 01:09:38.153085 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_94 I0123 01:09:38.154263 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_95 I0123 01:09:38.156466 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_96 I0123 01:09:38.157608 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_97 I0123 01:09:38.158881 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_98 I0123 01:09:38.159921 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_99 I0123 01:09:38.167183 18966 HdfsTable.java:348] load block md for bundle file part-00000 I0123 01:09:38.168246 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_1 I0123 01:09:38.169919 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_10 I0123 01:09:38.170943 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_100 I0123 01:09:38.172124 18966 HdfsTable.java:348] 
load block md for bundle file part-00000_copy_101 I0123 01:09:38.173240 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_102 I0123 01:09:38.174319 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_103 I0123 01:09:38.176472 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_104 I0123 01:09:38.177479 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_105 I0123 01:09:38.178882 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_106 I0123 01:09:38.180073 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_107 I0123 01:09:38.181217 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_108 I0123 01:09:38.182837 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_109 I0123 01:09:38.186864 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_11 I0123 01:09:38.190960 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_110 I0123 01:09:38.194998 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_111 I0123 01:09:38.198104 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_112 I0123 01:09:38.199336 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_113 I0123 01:09:38.202846 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_114 I0123 01:09:38.204136 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_115 I0123 01:09:38.205152 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_116 I0123 01:09:38.206931 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_117 I0123 01:09:38.208745 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_118 I0123 01:09:38.210924 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_119 I0123 01:09:38.212431 18966 HdfsTable.java:348] load block md for bundle file 
part-00000_copy_12 I0123 01:09:38.213794 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_120 I0123 01:09:38.214937 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_121 I0123 01:09:38.216143 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_122 I0123 01:09:38.217250 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_123 I0123 01:09:38.218322 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_124 I0123 01:09:38.219415 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_125 I0123 01:09:38.220634 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_126 I0123 01:09:38.221889 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_127 I0123 01:09:38.223151 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_128 I0123 01:09:38.224989 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_129 I0123 01:09:38.226451 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_13 I0123 01:09:38.227537 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_130 I0123 01:09:38.228596 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_131 I0123 01:09:38.229848 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_132 I0123 01:09:38.231783 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_133 I0123 01:09:38.232975 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_134 I0123 01:09:38.234107 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_135 I0123 01:09:38.235373 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_136 I0123 01:09:38.236675 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_137 I0123 01:09:38.238651 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_138 I0123 01:09:38.240066 
18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_139
I0123 01:09:38.241176 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_14
I0123 01:09:38.242337 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_140
I0123 01:09:38.243582 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_141
I0123 01:09:38.245342 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_142
I0123 01:09:38.246546 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_143
I0123 01:09:38.247733 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_15
I0123 01:09:38.249081 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_16
I0123 01:09:38.250267 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_17
I0123 01:09:38.251996 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_18
I0123 01:09:38.253199 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_19
I0123 01:09:38.254447 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_2
I0123 01:09:38.255604 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_20
I0123 01:09:38.257107 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_21
I0123 01:09:38.258756 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_22
I0123 01:09:38.259812 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_23
I0123 01:09:38.261026 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_24
I0123 01:09:38.262240 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_25
I0123 01:09:38.263310 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_26
I0123 01:09:38.265064 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_27
I0123 01:09:38.266247 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_28
I0123 01:09:38.267396 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_29
I0123 01:09:38.268555 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_3
I0123 01:09:38.269709 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_30
I0123 01:09:38.271445 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_31
I0123 01:09:38.272570 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_32
I0123 01:09:38.273902 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_33
I0123 01:09:38.275094 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_34
I0123 01:09:38.276180 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_35
I0123 01:09:38.277247 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_36
I0123 01:09:38.278579 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_37
I0123 01:09:38.279772 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_38
I0123 01:09:38.281108 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_39
I0123 01:09:38.282307 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_4
I0123 01:09:38.284837 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_40
I0123 01:09:38.286303 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_41
I0123 01:09:38.287413 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_42
I0123 01:09:38.288635 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_43
I0123 01:09:38.291163 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_44
I0123 01:09:38.292326 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_45
I0123 01:09:38.293357 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_46
I0123 01:09:38.294483 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_47
I0123 01:09:38.295547 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_48
I0123 01:09:38.296563 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_49
I0123 01:09:38.298061 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_5
I0123 01:09:38.299172 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_50
I0123 01:09:38.300256 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_51
I0123 01:09:38.301318 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_52
I0123 01:09:38.302536 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_53
I0123 01:09:38.304785 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_54
I0123 01:09:38.306051 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_55
I0123 01:09:38.307289 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_56
I0123 01:09:38.308431 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_57
I0123 01:09:38.310613 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_58
I0123 01:09:38.311791 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_59
I0123 01:09:38.314990 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_6
I0123 01:09:38.316992 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_60
I0123 01:09:38.318465 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_61
I0123 01:09:38.320016 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_62
I0123 01:09:38.321805 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_63
I0123 01:09:38.324868 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_64
I0123 01:09:38.326694 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_65
I0123 01:09:38.328975 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_66
I0123 01:09:38.332870 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_67
I0123 01:09:38.335618 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_68
I0123 01:09:38.336908 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_69
I0123 01:09:38.338006 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_7
I0123 01:09:38.339190 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_70
I0123 01:09:38.340260 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_71
I0123 01:09:38.341801 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_72
I0123 01:09:38.342977 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_73
I0123 01:09:38.344302 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_74
I0123 01:09:38.345471 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_75
I0123 01:09:38.346596 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_76
I0123 01:09:38.347642 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_77
I0123 01:09:38.348821 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_78
I0123 01:09:38.350457 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_79
I0123 01:09:38.351686 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_8
I0123 01:09:38.352828 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_80
I0123 01:09:38.353999 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_81
I0123 01:09:38.355247 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_82
I0123 01:09:38.356421 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_83
I0123 01:09:38.357487 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_84
I0123 01:09:38.358856 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_85
I0123 01:09:38.360097 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_86
I0123 01:09:38.361188 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_87
I0123 01:09:38.362244 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_88
I0123 01:09:38.363833 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_89
I0123 01:09:38.364861 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_9
I0123 01:09:38.366158 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_90
I0123 01:09:38.367218 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_91
I0123 01:09:38.368197 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_92
I0123 01:09:38.370401 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_93
I0123 01:09:38.371578 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_94
I0123 01:09:38.372699 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_95
I0123 01:09:38.374096 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_96
I0123 01:09:38.375222 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_97
I0123 01:09:38.377159 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_98
I0123 01:09:38.378257 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_99
I0123 01:09:38.385905 18966 HdfsTable.java:348] load block md for bundle file part-00000
I0123 01:09:38.387112 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_1
I0123 01:09:38.388217 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_10
I0123 01:09:38.389869 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_100
I0123 01:09:38.391131 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_101
I0123 01:09:38.392308 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_102
I0123 01:09:38.393375 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_103
I0123 01:09:38.394377 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_104
I0123 01:09:38.395511 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_105
I0123 01:09:38.396522 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_106
I0123 01:09:38.397522 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_107
I0123 01:09:38.398735 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_108
I0123 01:09:38.399773 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_109
I0123 01:09:38.400815 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_11
I0123 01:09:38.402103 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_110
I0123 01:09:38.403157 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_111
I0123 01:09:38.404247 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_112
I0123 01:09:38.405217 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_113
I0123 01:09:38.406278 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_114
I0123 01:09:38.407425 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_115
I0123 01:09:38.409540 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_116
I0123 01:09:38.410759 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_117
I0123 01:09:38.411839 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_118
I0123 01:09:38.413015 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_119
I0123 01:09:38.414232 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_12
I0123 01:09:38.416079 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_120
I0123 01:09:38.417260 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_121
I0123 01:09:38.418450 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_122
I0123 01:09:38.419581 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_123
I0123 01:09:38.420720 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_124
I0123 01:09:38.422755 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_125
I0123 01:09:38.424011 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_126
I0123 01:09:38.425120 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_127
I0123 01:09:38.426290 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_128
I0123 01:09:38.427285 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_129
I0123 01:09:38.429163 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_13
I0123 01:09:38.430383 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_130
I0123 01:09:38.431704 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_131
I0123 01:09:38.432763 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_132
I0123 01:09:38.433861 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_133
I0123 01:09:38.437527 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_134
I0123 01:09:38.441469 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_135
I0123 01:09:38.442792 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_136
I0123 01:09:38.445370 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_137
I0123 01:09:38.446650 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_138
I0123 01:09:38.447762 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_139
I0123 01:09:38.449093 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_14
I0123 01:09:38.450430 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_140
I0123 01:09:38.453224 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_141
I0123 01:09:38.457124 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_142
I0123 01:09:38.461120 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_143
I0123 01:09:38.464299 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_15
I0123 01:09:38.465415 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_16
I0123 01:09:38.466408 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_17
I0123 01:09:38.467530 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_18
I0123 01:09:38.469012 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_19
I0123 01:09:38.470227 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_2
I0123 01:09:38.471369 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_20
I0123 01:09:38.472605 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_21
I0123 01:09:38.473738 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_22
I0123 01:09:38.474715 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_23
I0123 01:09:38.475944 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_24
I0123 01:09:38.477038 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_25
I0123 01:09:38.478262 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_26
I0123 01:09:38.479408 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_27
I0123 01:09:38.480571 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_28
I0123 01:09:38.481540 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_29
I0123 01:09:38.483042 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_3
I0123 01:09:38.484175 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_30
I0123 01:09:38.485285 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_31
I0123 01:09:38.486481 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_32
I0123 01:09:38.487979 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_33
I0123 01:09:38.489413 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_34
I0123 01:09:38.490692 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_35
I0123 01:09:38.491839 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_36
I0123 01:09:38.493202 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_37
I0123 01:09:38.494503 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_38
I0123 01:09:38.497187 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_39
I0123 01:09:38.498693 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_4
I0123 01:09:38.500002 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_40
I0123 01:09:38.501258 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_41
I0123 01:09:38.503770 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_42
I0123 01:09:38.504904 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_43
I0123 01:09:38.506189 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_44
I0123 01:09:38.507503 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_45
I0123 01:09:38.508708 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_46
I0123 01:09:38.510027 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_47
I0123 01:09:38.511176 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_48
I0123 01:09:38.512204 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_49
I0123 01:09:38.513500 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_5
I0123 01:09:38.514744 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_50
I0123 01:09:38.516697 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_51
I0123 01:09:38.518162 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_52
I0123 01:09:38.519390 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_53
I0123 01:09:38.520661 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_54
I0123 01:09:38.522264 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_55
I0123 01:09:38.523555 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_56
I0123 01:09:38.524633 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_57
I0123 01:09:38.526046 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_58
I0123 01:09:38.527191 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_59
I0123 01:09:38.528298 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_6
I0123 01:09:38.530089 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_60
I0123 01:09:38.531111 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_61
I0123 01:09:38.533176 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_62
I0123 01:09:38.534338 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_63
I0123 01:09:38.536183 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_64
I0123 01:09:38.537288 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_65
I0123 01:09:38.538516 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_66
I0123 01:09:38.539571 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_67
I0123 01:09:38.540607 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_68
I0123 01:09:38.541708 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_69
I0123 01:09:38.543436 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_7
I0123 01:09:38.544595 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_70
I0123 01:09:38.545701 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_71
I0123 01:09:38.546854 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_72
I0123 01:09:38.549428 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_73
I0123 01:09:38.550716 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_74
I0123 01:09:38.552072 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_75
I0123 01:09:38.553225 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_76
I0123 01:09:38.554400 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_77
I0123 01:09:38.556592 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_78
I0123 01:09:38.558001 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_79
I0123 01:09:38.559231 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_8
I0123 01:09:38.560398 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_80
I0123 01:09:38.561647 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_81
I0123 01:09:38.562897 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_82
I0123 01:09:38.565943 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_83
I0123 01:09:38.567224 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_84
I0123 01:09:38.569118 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_85
I0123 01:09:38.570276 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_86
I0123 01:09:38.573762 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_87
I0123 01:09:38.575078 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_88
I0123 01:09:38.576237 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_89
I0123 01:09:38.577466 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_9
I0123 01:09:38.581009 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_90
I0123 01:09:38.585899 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_91
I0123 01:09:38.588944 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_92
I0123 01:09:38.593991 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_93
I0123 01:09:38.597088 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_94
I0123 01:09:38.601044 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_95
I0123 01:09:38.605093 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_96
I0123 01:09:38.609079 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_97
I0123 01:09:38.613754 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_98
I0123 01:09:38.616582 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_99
I0123 01:09:38.625294 18966 HdfsTable.java:348] load block md for bundle file part-00000
I0123 01:09:38.626464 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_1
I0123 01:09:38.630106 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_10
I0123 01:09:38.631325 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_100
I0123 01:09:38.632467 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_101
I0123 01:09:38.633594 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_102
I0123 01:09:38.634671 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_103
I0123 01:09:38.637030 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_104
I0123 01:09:38.638211 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_105
I0123 01:09:38.639443 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_106
I0123 01:09:38.640501 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_107
I0123 01:09:38.641666 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_108
I0123 01:09:38.643451 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_109
I0123 01:09:38.644732 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_11
I0123 01:09:38.645756 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_110
I0123 01:09:38.646824 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_111
I0123 01:09:38.648007 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_112
I0123 01:09:38.650161 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_113
I0123 01:09:38.651227 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_114
I0123 01:09:38.652396 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_115
I0123 01:09:38.653476 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_116
I0123 01:09:38.655072 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_117
I0123 01:09:38.656316 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_118
I0123 01:09:38.657318 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_119
I0123 01:09:38.658342 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_12
I0123 01:09:38.659530 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_120
I0123 01:09:38.660732 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_121
I0123 01:09:38.663004 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_122
I0123 01:09:38.664218 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_123
I0123 01:09:38.665228 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_124
I0123 01:09:38.667250 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_125
I0123 01:09:38.669299 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_126
I0123 01:09:38.670739 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_127
I0123 01:09:38.671838 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_128
I0123 01:09:38.673362 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_129
I0123 01:09:38.675981 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_13
I0123 01:09:38.677170 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_130
I0123 01:09:38.678297 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_131
I0123 01:09:38.679539 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_132
I0123 01:09:38.680562 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_133
I0123 01:09:38.682176 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_134
I0123 01:09:38.683328 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_135
I0123 01:09:38.684584 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_136
I0123 01:09:38.685837 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_137
I0123 01:09:38.687000 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_138
I0123 01:09:38.688279 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_139
I0123 01:09:38.689476 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_14
I0123 01:09:38.690805 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_140
I0123 01:09:38.692165 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_141
I0123 01:09:38.693387 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_142
I0123 01:09:38.695008 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_143
I0123 01:09:38.696146 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_15
I0123 01:09:38.697576 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_16
I0123 01:09:38.698653 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_17
I0123 01:09:38.699728 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_18
I0123 01:09:38.701027 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_19
I0123 01:09:38.702188 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_2
I0123 01:09:38.703467 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_20
I0123 01:09:38.704612 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_21
I0123 01:09:38.705773 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_22
I0123 01:09:38.706751 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_23
I0123 01:09:38.708828 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_24
I0123 01:09:38.709833 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_25
I0123 01:09:38.710989 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_26
I0123 01:09:38.712024 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_27
I0123 01:09:38.713254 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_28
I0123 01:09:38.715165 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_29
I0123 01:09:38.716270 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_3
I0123 01:09:38.722612 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_30
I0123 01:09:38.725100 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_31
I0123 01:09:38.729936 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_32
I0123 01:09:38.734947 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_33
I0123 01:09:38.736276 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_34
I0123 01:09:38.740731 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_35
I0123 01:09:38.741945 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_36
I0123 01:09:38.744329 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_37
I0123 01:09:38.745482 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_38
I0123 01:09:38.746628 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_39
I0123 01:09:38.748405 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_4
I0123 01:09:38.750372 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_40
I0123 01:09:38.751550 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_41
I0123 01:09:38.752912 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_42
I0123 01:09:38.754941 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_43
I0123 01:09:38.756161 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_44
I0123 01:09:38.757884 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_45
I0123 01:09:38.759145 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_46
I0123 01:09:38.760501 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_47
I0123 01:09:38.761740 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_48
I0123 01:09:38.762856 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_49
I0123 01:09:38.764067 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_5
I0123 01:09:38.765141 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_50
I0123 01:09:38.766175 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_51
I0123 01:09:38.768256 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_52
I0123 01:09:38.769503 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_53
I0123 01:09:38.770568 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_54
I0123 01:09:38.771572 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_55
I0123 01:09:38.772809 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_56
I0123 01:09:38.774205 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_57
I0123 01:09:38.775202 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_58
I0123 01:09:38.776239 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_59
I0123 01:09:38.777384 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_6
I0123 01:09:38.778491 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_60
I0123 01:09:38.779577 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_61
I0123 01:09:38.780890 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_62
I0123 01:09:38.782238 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_63
I0123 01:09:38.783481 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_64
I0123 01:09:38.784677 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_65
I0123 01:09:38.786909 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_66
I0123 01:09:38.788188 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_67
I0123 01:09:38.789291 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_68
I0123 01:09:38.790930 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_69
I0123 01:09:38.792166 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_7
I0123 01:09:38.794209 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_70
I0123 01:09:38.795326 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_71
I0123 01:09:38.796802 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_72
I0123 01:09:38.798085 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_73
I0123 01:09:38.799234 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_74
I0123 01:09:38.800578 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_75
I0123 01:09:38.801663 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_76
I0123 01:09:38.802880 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_77
I0123 01:09:38.803910 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_78
I0123 01:09:38.804944 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_79
I0123 01:09:38.807188 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_8
I0123 01:09:38.808380 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_80
I0123 01:09:38.809603 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_81
I0123 01:09:38.810721 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_82
I0123 01:09:38.811856 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_83
I0123 01:09:38.813374 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_84
I0123 01:09:38.815300 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_85
I0123 01:09:38.816980 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_86
I0123 01:09:38.818260 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_87
I0123 01:09:38.819650 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_88
I0123 01:09:38.820870 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_89
I0123 01:09:38.822144 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_9
I0123 01:09:38.823478 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_90
I0123 01:09:38.824517 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_91
I0123 01:09:38.826545 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_92
I0123 01:09:38.827854 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_93
I0123 01:09:38.829138 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_94
I0123 01:09:38.830252 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_95
I0123 01:09:38.831398 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_96
I0123 01:09:38.832943 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_97
I0123 01:09:38.834226 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_98
I0123 01:09:38.835989 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_99
I0123 01:09:38.853370 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:38.853494 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:38.940165
11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 01:09:38.940364 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 01:09:38.968216 18966 HdfsTable.java:348] load block md for bundle file part-00000 I0123 01:09:38.969527 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_1 I0123 01:09:39.253514 6590 rpc-trace.cc:184] RPC call: CatalogService.PrioritizeLoad(from 10.153.201.16:55088) I0123 01:09:39.253684 6590 catalog-server.cc:127] PrioritizeLoad(): request=TPrioritizeLoadRequest { 01: protocol_version (i32) = 0, 02: header (struct) = TCatalogServiceRequestHeader { }, 03: object_descs (list) = list[1] { [0] = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 0, 05: table (struct) = TTable { 01: db_name (string) = "history", 02: tbl_name (string) = "sa_issue_rating", }, }, }, } I0123 01:09:39.253973 6590 catalog-server.cc:133] PrioritizeLoad(): response=TPrioritizeLoadResponse { 01: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, } I0123 01:09:39.254098 6590 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.PrioritizeLoad from 10.153.201.16:55088 took 1.000ms I0123 01:09:39.254395 11211 TableLoadingMgr.java:281] Loading next table. 
Remaining items in queue: 0 I0123 01:09:39.254827 11891 TableLoader.java:59] Loading metadata for: history.sa_issue_rating I0123 01:09:39.288615 11891 Table.java:161] Loading column stats for table: sa_issue_rating I0123 01:09:39.331853 11891 Column.java:69] col stats: issue_severity #distinct=1 I0123 01:09:39.331939 11891 Column.java:69] col stats: issue_id #distinct=1 I0123 01:09:39.332001 11891 Column.java:69] col stats: issue_type #distinct=1 I0123 01:09:39.332067 11891 Column.java:69] col stats: rating #distinct=1 I0123 01:09:39.332126 11891 Column.java:69] col stats: pa__arrival_ts #distinct=1 I0123 01:09:39.332221 11891 Column.java:69] col stats: pa__bundle__fk #distinct=1 I0123 01:09:39.332288 11891 Column.java:69] col stats: pa__collector_instance_id #distinct=1 I0123 01:09:39.332353 11891 Column.java:69] col stats: pa__processed_ts #distinct=1 I0123 01:09:39.332412 11891 Column.java:69] col stats: timestamp #distinct=1 I0123 01:09:39.332547 11891 Column.java:69] col stats: username #distinct=1 I0123 01:09:39.332614 11891 Column.java:69] col stats: pa__is_external #distinct=2 I0123 01:09:39.332675 11891 Column.java:69] col stats: id #distinct=1 I0123 01:09:39.332777 11891 Column.java:69] col stats: issue_affected_object_type #distinct=1 I0123 01:09:39.332836 11891 HdfsTable.java:1030] load table from Hive Metastore: history.sa_issue_rating I0123 01:09:39.338714 11891 MetaStoreUtil.java:129] Fetching 1 partitions for: history.sa_issue_rating using partition batch size: 1000 I0123 01:09:39.366842 11891 HdfsTable.java:348] load block md for sa_issue_rating file 78490243b4a679d3-8f1949ed00000000_1260708045_data.0.parq I0123 01:09:39.371685 11891 FsPermissionChecker.java:290] No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs) Java exception follows: org.apache.hadoop.hdfs.protocol.AclException: The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false. 
at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080) at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3344) at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1983) at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1980) at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.getAclStatus(DistributedFileSystem.java:1980) at com.cloudera.impala.util.FsPermissionChecker.getPermissions(FsPermissionChecker.java:288) at com.cloudera.impala.catalog.HdfsTable.getAvailableAccessLevel(HdfsTable.java:760) at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:844) at com.cloudera.impala.catalog.HdfsTable.loadAllPartitions(HdfsTable.java:726) at com.cloudera.impala.catalog.HdfsTable.load(HdfsTable.java:1035) at com.cloudera.impala.catalog.HdfsTable.load(HdfsTable.java:982) at com.cloudera.impala.catalog.TableLoader.load(TableLoader.java:81) at com.cloudera.impala.catalog.TableLoadingMgr$2.call(TableLoadingMgr.java:232) at com.cloudera.impala.catalog.TableLoadingMgr$2.call(TableLoadingMgr.java:229) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AclException): The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false. 
at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080) at org.apache.hadoop.ipc.Client.call(Client.java:1471) at org.apache.hadoop.ipc.Client.call(Client.java:1408) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230) at com.sun.proxy.$Proxy10.getAclStatus(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAclStatus(ClientNamenodeProtocolTranslatorPB.java:1327) at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104) at com.sun.proxy.$Proxy11.getAclStatus(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3342) ... 17 more I0123 01:09:39.372686 11891 HdfsTable.java:441] Loading disk ids for: history.sa_issue_rating. nodes: 3. filesystem: hdfs://ph-hdp-prd-nn01:8020 I0123 01:09:39.398358 18966 HdfsTable.java:348] load block md for bundle file part-00000 I0123 01:09:39.403126 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_1 I0123 01:09:39.406908 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_10 I0123 01:09:39.411178 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_11 I0123 01:09:39.413849 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_12 I0123 01:09:39.415284 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_13 I0123 01:09:39.416581 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_14 I0123 01:09:39.418115 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_15 I0123 01:09:39.420205 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_16 I0123 01:09:39.421330 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_17 I0123 01:09:39.422493 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_18 I0123 01:09:39.423599 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_2 I0123 01:09:39.424862 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_3 I0123 01:09:39.426450 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_4 I0123 01:09:39.428239 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_5 I0123 01:09:39.429394 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_6 I0123 01:09:39.430394 18966 HdfsTable.java:348] load block md for bundle file 
part-00000_copy_7 I0123 01:09:39.431386 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_8 I0123 01:09:39.433143 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_9 I0123 01:09:39.448124 18966 HdfsTable.java:348] load block md for bundle file part-00000 I0123 01:09:39.449481 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_1 I0123 01:09:39.450573 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_10 I0123 01:09:39.453094 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_11 I0123 01:09:39.454113 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_12 I0123 01:09:39.455253 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_13 I0123 01:09:39.456266 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_14 I0123 01:09:39.457355 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_15 I0123 01:09:39.458438 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_16 I0123 01:09:39.460839 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_17 I0123 01:09:39.462100 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_18 I0123 01:09:39.463306 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_19 I0123 01:09:39.464265 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_2 I0123 01:09:39.465330 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_20 I0123 01:09:39.467066 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_21 I0123 01:09:39.468505 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_22 I0123 01:09:39.469657 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_23 I0123 01:09:39.470790 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_24 I0123 01:09:39.472362 18966 HdfsTable.java:348] load 
block md for bundle file part-00000_copy_25 I0123 01:09:39.473538 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_26 I0123 01:09:39.474939 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_27 I0123 01:09:39.476135 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_28 I0123 01:09:39.477916 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_29 I0123 01:09:39.479205 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_3 I0123 01:09:39.480273 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_30 I0123 01:09:39.481554 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_31 I0123 01:09:39.482740 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_32 I0123 01:09:39.484051 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_33 I0123 01:09:39.486940 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_34 I0123 01:09:39.488673 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_35 I0123 01:09:39.489728 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_36 I0123 01:09:39.490687 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_37 I0123 01:09:39.491786 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_38 I0123 01:09:39.493469 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_39 I0123 01:09:39.494735 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_4 I0123 01:09:39.495860 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_40 I0123 01:09:39.497004 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_41 I0123 01:09:39.498096 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_42 I0123 01:09:39.501688 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_43 I0123 
01:09:39.502887 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_44 I0123 01:09:39.503964 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_45 I0123 01:09:39.505157 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_46 I0123 01:09:39.506487 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_47 I0123 01:09:39.507872 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_48 I0123 01:09:39.509160 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_49 I0123 01:09:39.510330 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_5 I0123 01:09:39.511494 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_50 I0123 01:09:39.512589 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_51 I0123 01:09:39.515128 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_52 I0123 01:09:39.517195 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_53 I0123 01:09:39.524817 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_6 I0123 01:09:39.529942 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_7 I0123 01:09:39.531136 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_8 I0123 01:09:39.534986 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_9 I0123 01:09:39.545513 18966 HdfsTable.java:348] load block md for bundle file part-00000 I0123 01:09:39.550122 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_1 I0123 01:09:39.551905 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_10 I0123 01:09:39.554067 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_11 I0123 01:09:39.558511 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_12 I0123 01:09:39.561159 18966 HdfsTable.java:348] load block md for bundle file 
part-00000_copy_13 I0123 01:09:39.565201 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_14 I0123 01:09:39.571053 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_15 I0123 01:09:39.573110 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_16 I0123 01:09:39.574995 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_17 I0123 01:09:39.576370 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_18 I0123 01:09:39.577371 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_19 I0123 01:09:39.578685 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_2 I0123 01:09:39.580062 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_20 I0123 01:09:39.581209 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_21 I0123 01:09:39.582747 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_22 I0123 01:09:39.583885 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_23 I0123 01:09:39.585117 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_24 I0123 01:09:39.586212 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_25 I0123 01:09:39.587393 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_26 I0123 01:09:39.589205 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_27 I0123 01:09:39.590550 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_28 I0123 01:09:39.591811 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_29 I0123 01:09:39.593091 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_3 I0123 01:09:39.594135 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_30 I0123 01:09:39.595597 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_31 I0123 01:09:39.596690 18966 
HdfsTable.java:348] load block md for bundle file part-00000_copy_4 I0123 01:09:39.598019 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_5 I0123 01:09:39.599094 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_6 I0123 01:09:39.600136 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_7 I0123 01:09:39.601845 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_8 I0123 01:09:39.602818 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_9 I0123 01:09:39.612251 18966 HdfsTable.java:348] load block md for bundle file part-00000 I0123 01:09:39.613447 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_1 I0123 01:09:39.615306 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_10 I0123 01:09:39.616304 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_11 I0123 01:09:39.618175 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_12 I0123 01:09:39.619349 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_13 I0123 01:09:39.621387 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_14 I0123 01:09:39.622763 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_2 I0123 01:09:39.623837 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_3 I0123 01:09:39.624816 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_4 I0123 01:09:39.625864 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_5 I0123 01:09:39.628682 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_6 I0123 01:09:39.629789 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_7 I0123 01:09:39.630725 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_8 I0123 01:09:39.631870 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_9 I0123 
01:09:39.644667 18966 HdfsTable.java:348] load block md for bundle file part-00000 I0123 01:09:39.645805 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_1 I0123 01:09:39.647763 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_10 I0123 01:09:39.648854 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_11 I0123 01:09:39.649840 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_12 I0123 01:09:39.651057 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_13 I0123 01:09:39.652163 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_14 I0123 01:09:39.653343 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_15 I0123 01:09:39.654546 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_16 I0123 01:09:39.655530 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_17 I0123 01:09:39.656524 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_18 I0123 01:09:39.657465 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_19 I0123 01:09:39.658506 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_2 I0123 01:09:39.660711 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_20 I0123 01:09:39.661705 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_21 I0123 01:09:39.662768 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_22 I0123 01:09:39.663847 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_23 I0123 01:09:39.665171 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_24 I0123 01:09:39.666987 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_25 I0123 01:09:39.668187 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_26 I0123 01:09:39.669299 18966 HdfsTable.java:348] load block md for bundle 
file part-00000_copy_27 I0123 01:09:39.670269 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_28 I0123 01:09:39.671473 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_29 I0123 01:09:39.672940 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_3 I0123 01:09:39.674124 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_30 I0123 01:09:39.677646 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_31 I0123 01:09:39.682875 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_32 I0123 01:09:39.685134 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_33 I0123 01:09:39.691011 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_34 I0123 01:09:39.692256 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_35 I0123 01:09:39.698184 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_36 I0123 01:09:39.701994 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_37 I0123 01:09:39.703276 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_38 I0123 01:09:39.704254 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_39 I0123 01:09:39.706089 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_4 I0123 01:09:39.707274 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_40 I0123 01:09:39.708513 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_41 I0123 01:09:39.709486 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_42 I0123 01:09:39.710517 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_43 I0123 01:09:39.712344 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_44 I0123 01:09:39.713563 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_45 I0123 01:09:39.714649 18966 
HdfsTable.java:348] load block md for bundle file part-00000_copy_46 I0123 01:09:39.715836 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_47 I0123 01:09:39.717161 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_48 I0123 01:09:39.718880 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_49 I0123 01:09:39.720350 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_5 I0123 01:09:39.721632 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_50 I0123 01:09:39.722795 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_51 I0123 01:09:39.723935 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_52 I0123 01:09:39.726116 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_53 I0123 01:09:39.727272 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_6 I0123 01:09:39.728350 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_7 I0123 01:09:39.729615 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_8 I0123 01:09:39.730789 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_9 I0123 01:09:39.754096 18966 HdfsTable.java:348] load block md for bundle file part-00000 I0123 01:09:39.755425 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_1 I0123 01:09:39.756753 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_10 I0123 01:09:39.757992 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_11 I0123 01:09:39.759223 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_12 I0123 01:09:39.762503 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_13 I0123 01:09:39.764223 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_14 I0123 01:09:39.765457 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_15 
I0123 01:09:39.766659 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_16 I0123 01:09:39.769664 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_17 I0123 01:09:39.770725 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_18 I0123 01:09:39.771842 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_19 I0123 01:09:39.772933 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_2 I0123 01:09:39.774101 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_20 I0123 01:09:39.775233 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_21 I0123 01:09:39.777412 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_22 I0123 01:09:39.778620 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_23 I0123 01:09:39.779635 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_24 I0123 01:09:39.780694 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_25 I0123 01:09:39.781615 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_26 I0123 01:09:39.782675 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_27 I0123 01:09:39.784116 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_28 I0123 01:09:39.785284 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_29 I0123 01:09:39.786278 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_3 I0123 01:09:39.787446 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_30 I0123 01:09:39.788409 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_31 I0123 01:09:39.789587 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_32 I0123 01:09:39.790663 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_33 I0123 01:09:39.791616 18966 HdfsTable.java:348] load block md 
for bundle file part-00000_copy_34 I0123 01:09:39.792778 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_35 I0123 01:09:39.793747 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_36 I0123 01:09:39.795033 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_37 I0123 01:09:39.797044 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_38 I0123 01:09:39.798163 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_39 I0123 01:09:39.799326 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_4 I0123 01:09:39.800432 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_40 I0123 01:09:39.801542 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_41 I0123 01:09:39.805277 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_42 I0123 01:09:39.809378 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_43 I0123 01:09:39.813509 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_44 I0123 01:09:39.816917 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_45 I0123 01:09:39.819067 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_46 I0123 01:09:39.820293 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_47 I0123 01:09:39.823112 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_48 I0123 01:09:39.825016 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_49 I0123 01:09:39.827117 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_5 I0123 01:09:39.829442 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_50 I0123 01:09:39.830513 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_51 I0123 01:09:39.831468 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_52 I0123 01:09:39.833083 
18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_53 I0123 01:09:39.834187 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_6 I0123 01:09:39.836587 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_7 I0123 01:09:39.837656 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_8 I0123 01:09:39.839861 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_9 I0123 01:09:39.847956 18966 HdfsTable.java:348] load block md for bundle file part-00000 I0123 01:09:39.849005 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_1 I0123 01:09:39.850266 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_10 I0123 01:09:39.851280 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_11 I0123 01:09:39.852434 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_12 I0123 01:09:39.853405 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_13 I0123 01:09:39.854080 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:09:39.854182 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:09:39.854511 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_14 I0123 01:09:39.856881 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_15 I0123 01:09:39.858000 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_16 I0123 01:09:39.859017 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_17 I0123 01:09:39.860060 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_18 I0123 01:09:39.860968 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_19 I0123 01:09:39.861953 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_2 I0123 01:09:39.863464 18966 
HdfsTable.java:348] load block md for bundle file part-00000_copy_20 I0123 01:09:39.864680 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_21 I0123 01:09:39.865648 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_22 I0123 01:09:39.866541 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_23 I0123 01:09:39.867671 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_24 I0123 01:09:39.869086 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_25 I0123 01:09:39.869976 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_26 I0123 01:09:39.870966 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_27 I0123 01:09:39.871932 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_28 I0123 01:09:39.873281 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_29 I0123 01:09:39.874447 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_3 I0123 01:09:39.875900 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_30 I0123 01:09:39.877547 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_31 I0123 01:09:39.878855 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_32 I0123 01:09:39.879928 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_33 I0123 01:09:39.881237 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_34 I0123 01:09:39.882529 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_35 I0123 01:09:39.884335 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_36 I0123 01:09:39.885355 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_37 I0123 01:09:39.886472 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_38 I0123 01:09:39.887590 18966 HdfsTable.java:348] load block md for bundle file 
part-00000_copy_39 I0123 01:09:39.888795 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_4 I0123 01:09:39.890982 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_40 I0123 01:09:39.892122 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_41 I0123 01:09:39.893061 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_42 I0123 01:09:39.894229 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_43 I0123 01:09:39.895289 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_44 I0123 01:09:39.897521 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_45 I0123 01:09:39.899859 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_46 I0123 01:09:39.901237 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_47 I0123 01:09:39.902333 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_48 I0123 01:09:39.903302 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_49 I0123 01:09:39.904520 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_5 I0123 01:09:39.905613 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_50 I0123 01:09:39.906677 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_51 I0123 01:09:39.908082 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_52 I0123 01:09:39.909273 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_53 I0123 01:09:39.910965 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_6 I0123 01:09:39.912267 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_7 I0123 01:09:39.913408 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_8 I0123 01:09:39.914464 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_9 I0123 01:09:40.854451 11232 rpc-trace.cc:184] 
RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:40.854552 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:40.941098 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:09:40.941289 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns
I0123 01:09:41.854696 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:41.854842 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms
I0123 01:09:42.854876 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:42.854980 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:42.941869 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:09:42.942121 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns
I0123 01:09:43.855763 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:43.855921 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:44.270668 14317 rpc-trace.cc:184] RPC call: CatalogService.ExecDdl(from 10.153.201.19:35818)
I0123 01:09:44.270807 14317 catalog-server.cc:65] ExecDdl(): request=TDdlExecRequest {
  01: protocol_version (i32) = 0,
  02: ddl_type (i32) = 10,
  11: drop_table_or_view_params (struct) = TDropTableOrViewParams {
    01: table_name (struct) = TTableName {
      01: db_name (string) = "staging",
      02: table_name (string) = "ph_dow_20170123_010826_bundle",
    },
    02: if_exists (bool) = true,
    03: purge (bool) = false,
    04: is_table (bool) = true,
  },
  17: header (struct) = TCatalogServiceRequestHeader {
    01: requesting_user (string) = "phanalytics@PHONEHOME.VMWARE.COM",
  },
}
I0123 01:09:44.271327 14317 CatalogOpExecutor.java:1156] Dropping table/view staging.ph_dow_20170123_010826_bundle
I0123 01:09:44.855703 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:44.855891 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms
I0123 01:09:44.942632 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:09:44.942750 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 1.000ms
I0123 01:09:45.856109 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:45.856215 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:46.856864 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:46.857046 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:46.943274 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:09:46.943409 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns
I0123 01:09:47.856508 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:09:47.856618 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:09:48.856992 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123
01:09:48.857095 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:09:48.944198 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 01:09:48.944327 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 01:09:49.857755 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:09:49.857971 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:09:50.857511 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:09:50.857741 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms I0123 01:09:50.945041 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 01:09:50.945196 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 01:09:51.797026 18966 HdfsTable.java:348] load block md for bundle file part-00000 I0123 01:09:51.798777 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_1 I0123 01:09:51.800258 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_10 I0123 01:09:51.801744 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_100 I0123 01:09:51.803092 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_101 I0123 01:09:51.806881 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_102 I0123 01:09:51.810289 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_103 I0123 01:09:51.811609 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_104 I0123 01:09:51.813025 18966 
HdfsTable.java:348] load block md for bundle file part-00000_copy_105 I0123 01:09:51.814893 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_106 I0123 01:09:51.815999 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_107 I0123 01:09:51.817807 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_108 I0123 01:09:51.818886 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_11 I0123 01:09:51.821094 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_12 I0123 01:09:51.822088 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_13 I0123 01:09:51.823204 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_14 I0123 01:09:51.824291 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_15 I0123 01:09:51.825455 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_16 I0123 01:09:51.827172 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_17 I0123 01:09:51.828305 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_18 I0123 01:09:51.829243 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_19 I0123 01:09:51.830219 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_2 I0123 01:09:51.831315 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_20 I0123 01:09:51.832455 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_21 I0123 01:09:51.833897 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_22 I0123 01:09:51.835036 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_23 I0123 01:09:51.836015 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_24 I0123 01:09:51.836916 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_25 I0123 01:09:51.837980 18966 HdfsTable.java:348] load block md for bundle file 
part-00000_copy_26 I0123 01:09:51.839119 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_27 I0123 01:09:51.840246 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_28 I0123 01:09:51.841429 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_29 I0123 01:09:51.842417 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_3 I0123 01:09:51.843399 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_30 I0123 01:09:51.844347 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_31 I0123 01:09:51.846484 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_32 I0123 01:09:51.847530 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_33 I0123 01:09:51.848625 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_34 I0123 01:09:51.849689 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_35 I0123 01:09:51.850713 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_36 I0123 01:09:51.852900 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_37 I0123 01:09:51.853936 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_38 I0123 01:09:51.855002 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_39 I0123 01:09:51.856118 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_4 I0123 01:09:51.857122 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_40 I0123 01:09:51.857952 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:09:51.858095 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:09:51.858556 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_41 I0123 01:09:51.859652 18966 HdfsTable.java:348] load block md for bundle 
file part-00000_copy_42 I0123 01:09:51.860795 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_43 I0123 01:09:51.861852 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_44 I0123 01:09:51.862983 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_45 I0123 01:09:51.864073 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_46 I0123 01:09:51.865041 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_47 I0123 01:09:51.866133 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_48 I0123 01:09:51.867138 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_49 I0123 01:09:51.867993 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_5 I0123 01:09:51.869060 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_50 I0123 01:09:51.870095 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_51 I0123 01:09:51.872231 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_52 I0123 01:09:51.873340 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_53 I0123 01:09:51.874424 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_54 I0123 01:09:51.875427 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_55 I0123 01:09:51.876446 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_56 I0123 01:09:51.878643 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_57 I0123 01:09:51.879936 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_58 I0123 01:09:51.881227 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_59 I0123 01:09:51.882984 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_6 I0123 01:09:51.884287 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_60 I0123 01:09:51.885510 18966 
HdfsTable.java:348] load block md for bundle file part-00000_copy_61 I0123 01:09:51.886823 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_62 I0123 01:09:51.888062 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_63 I0123 01:09:51.889199 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_64 I0123 01:09:51.890275 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_65 I0123 01:09:51.892184 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_66 I0123 01:09:51.893430 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_67 I0123 01:09:51.894661 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_68 I0123 01:09:51.895931 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_69 I0123 01:09:51.897236 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_7 I0123 01:09:51.898452 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_70 I0123 01:09:51.899626 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_71 I0123 01:09:51.900749 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_72 I0123 01:09:51.901904 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_73 I0123 01:09:51.902997 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_74 I0123 01:09:51.904176 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_75 I0123 01:09:51.905279 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_76 I0123 01:09:51.906435 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_77 I0123 01:09:51.907838 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_78 I0123 01:09:51.908936 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_79 I0123 01:09:51.910225 18966 HdfsTable.java:348] load block md for bundle file 
part-00000_copy_8 I0123 01:09:51.915088 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_80 I0123 01:09:51.916323 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_81 I0123 01:09:51.923220 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_82 I0123 01:09:51.924561 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_83 I0123 01:09:51.929177 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_84 I0123 01:09:51.933323 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_85 I0123 01:09:51.934384 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_86 I0123 01:09:51.935457 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_87 I0123 01:09:51.936687 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_88 I0123 01:09:51.940476 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_89 I0123 01:09:51.941829 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_9 I0123 01:09:51.943048 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_90 I0123 01:09:51.944120 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_91 I0123 01:09:51.945098 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_92 I0123 01:09:51.946694 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_93 I0123 01:09:51.947962 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_94 I0123 01:09:51.949301 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_95 I0123 01:09:51.953348 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_96 I0123 01:09:51.954887 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_97 I0123 01:09:51.956687 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_98 I0123 01:09:51.958658 18966 
HdfsTable.java:348] load block md for bundle file part-00000_copy_99 I0123 01:09:52.095182 18966 HdfsTable.java:348] load block md for bundle file part-00000 I0123 01:09:52.097623 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_1 I0123 01:09:52.098565 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_10 I0123 01:09:52.099503 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_11 I0123 01:09:52.100395 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_12 I0123 01:09:52.101403 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_13 I0123 01:09:52.102412 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_14 I0123 01:09:52.104219 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_15 I0123 01:09:52.105126 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_16 I0123 01:09:52.106185 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_17 I0123 01:09:52.107206 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_18 I0123 01:09:52.108286 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_19 I0123 01:09:52.109977 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_2 I0123 01:09:52.111032 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_20 I0123 01:09:52.112082 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_21 I0123 01:09:52.113101 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_22 I0123 01:09:52.114282 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_23 I0123 01:09:52.115353 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_24 I0123 01:09:52.117079 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_25 I0123 01:09:52.118175 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_26 
I0123 01:09:52.119365 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_27 I0123 01:09:52.120404 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_28 I0123 01:09:52.121366 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_29 I0123 01:09:52.122401 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_3 I0123 01:09:52.124061 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_30 I0123 01:09:52.125322 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_31 I0123 01:09:52.126564 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_32 I0123 01:09:52.127568 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_33 I0123 01:09:52.128540 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_34 I0123 01:09:52.130511 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_35 I0123 01:09:52.131633 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_36 I0123 01:09:52.132880 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_37 I0123 01:09:52.133857 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_38 I0123 01:09:52.135061 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_39 I0123 01:09:52.136186 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_4 I0123 01:09:52.138249 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_40 I0123 01:09:52.139282 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_41 I0123 01:09:52.140444 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_42 I0123 01:09:52.141499 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_43 I0123 01:09:52.142467 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_44 I0123 01:09:52.145020 18966 HdfsTable.java:348] load block md 
for bundle file part-00000_copy_45 I0123 01:09:52.146214 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_46 I0123 01:09:52.147277 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_5 I0123 01:09:52.148244 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_6 I0123 01:09:52.149305 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_7 I0123 01:09:52.150315 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_8 I0123 01:09:52.151756 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_9 I0123 01:09:52.166656 18966 HdfsTable.java:348] load block md for bundle file part-00000 I0123 01:09:52.167879 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_1 I0123 01:09:52.169088 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_10 I0123 01:09:52.169986 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_11 I0123 01:09:52.172374 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_12 I0123 01:09:52.173480 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_13 I0123 01:09:52.174427 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_14 I0123 01:09:52.175504 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_15 I0123 01:09:52.176625 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_16 I0123 01:09:52.178045 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_17 I0123 01:09:52.179144 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_18 I0123 01:09:52.180189 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_19 I0123 01:09:52.181301 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_2 I0123 01:09:52.182435 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_20 I0123 01:09:52.183526 18966 
HdfsTable.java:348] load block md for bundle file part-00000_copy_21 I0123 01:09:52.185031 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_22 I0123 01:09:52.186331 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_23 I0123 01:09:52.187382 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_24 I0123 01:09:52.188459 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_25 I0123 01:09:52.189450 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_26 I0123 01:09:52.193102 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_27 I0123 01:09:52.197571 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_28 I0123 01:09:52.201128 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_29 I0123 01:09:52.205710 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_3 I0123 01:09:52.209000 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_30 I0123 01:09:52.213829 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_31 I0123 01:09:52.215090 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_32 I0123 01:09:52.216984 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_33 I0123 01:09:52.218147 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_34 I0123 01:09:52.219209 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_35 I0123 01:09:52.220398 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_36 I0123 01:09:52.221477 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_37 I0123 01:09:52.223826 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_38 I0123 01:09:52.224855 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_39 I0123 01:09:52.225917 18966 HdfsTable.java:348] load block md for bundle file 
part-00000_copy_4 I0123 01:09:52.226935 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_40 I0123 01:09:52.228078 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_41 I0123 01:09:52.229588 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_42 I0123 01:09:52.230664 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_43 I0123 01:09:52.231626 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_44 I0123 01:09:52.232625 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_45 I0123 01:09:52.233610 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_46 I0123 01:09:52.234632 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_47 I0123 01:09:52.237318 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_48 I0123 01:09:52.238687 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_49 I0123 01:09:52.239903 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_5 I0123 01:09:52.241006 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_50 I0123 01:09:52.243026 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_51 I0123 01:09:52.244356 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_52 I0123 01:09:52.245446 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_53 I0123 01:09:52.246511 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_54 I0123 01:09:52.247521 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_55 I0123 01:09:52.249506 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_56 I0123 01:09:52.250583 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_57 I0123 01:09:52.251577 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_58 I0123 01:09:52.252626 18966 
HdfsTable.java:348] load block md for bundle file part-00000_copy_59 I0123 01:09:52.253882 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_6 I0123 01:09:52.254914 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_60 I0123 01:09:52.256032 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_7 I0123 01:09:52.257194 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_8 I0123 01:09:52.258258 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_9 I0123 01:09:52.804226 18966 HdfsTable.java:348] load block md for bundle file part-00000 I0123 01:09:52.805758 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_1 I0123 01:09:52.806767 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_10 I0123 01:09:52.807709 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_100 I0123 01:09:52.808753 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_101 I0123 01:09:52.809746 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_102 I0123 01:09:52.811885 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_103 I0123 01:09:52.813100 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_104 I0123 01:09:52.814146 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_105 I0123 01:09:52.815222 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_106 I0123 01:09:52.816213 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_107 I0123 01:09:52.817414 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_108 I0123 01:09:52.818500 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_109 I0123 01:09:52.819833 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_11 I0123 01:09:52.820854 18966 HdfsTable.java:348] load block md for bundle file 
part-00000_copy_110 I0123 01:09:52.821796 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_111 I0123 01:09:52.822692 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_112 I0123 01:09:52.823623 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_113 I0123 01:09:52.825006 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_114 I0123 01:09:52.825997 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_115 I0123 01:09:52.827136 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_116 I0123 01:09:52.828251 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_117 I0123 01:09:52.829360 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_118 I0123 01:09:52.831789 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_119 I0123 01:09:52.832942 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_12 I0123 01:09:52.834000 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_120 I0123 01:09:52.835034 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_121 I0123 01:09:52.836602 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_122 I0123 01:09:52.838553 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_123 I0123 01:09:52.839576 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_124 I0123 01:09:52.840571 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_125 I0123 01:09:52.841954 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_126 I0123 01:09:52.843152 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_13 I0123 01:09:52.844240 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_14 I0123 01:09:52.848978 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_15 I0123 01:09:52.853055 
18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_16 I0123 01:09:52.857300 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_17 I0123 01:09:52.858891 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:09:52.858989 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:09:52.861552 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_18 I0123 01:09:52.863155 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_19 I0123 01:09:52.865228 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_2 I0123 01:09:52.869292 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_20 I0123 01:09:52.871276 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_21 I0123 01:09:52.872412 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_22 I0123 01:09:52.873497 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_23 I0123 01:09:52.874578 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_24 I0123 01:09:52.875563 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_25 I0123 01:09:52.879011 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_26 I0123 01:09:52.879994 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_27 I0123 01:09:52.881083 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_28 I0123 01:09:52.882081 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_29 I0123 01:09:52.883332 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_3 I0123 01:09:52.885120 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_30 I0123 01:09:52.886287 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_31 I0123 
01:09:52.887250 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_32 I0123 01:09:52.888252 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_33 I0123 01:09:52.889307 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_34 I0123 01:09:52.890485 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_35 I0123 01:09:52.892069 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_36 I0123 01:09:52.893311 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_37 I0123 01:09:52.894400 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_38 I0123 01:09:52.895570 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_39 I0123 01:09:52.896636 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_4 I0123 01:09:52.899091 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_40 I0123 01:09:52.900252 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_41 I0123 01:09:52.901253 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_42 I0123 01:09:52.902232 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_43 I0123 01:09:52.903257 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_44 I0123 01:09:52.906075 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_45 I0123 01:09:52.907114 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_46 I0123 01:09:52.908150 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_47 I0123 01:09:52.909272 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_48 I0123 01:09:52.910421 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_49 I0123 01:09:52.912652 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_5 I0123 01:09:52.913902 18966 HdfsTable.java:348] load block md for 
bundle file part-00000_copy_50 I0123 01:09:52.915220 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_51 I0123 01:09:52.916429 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_52 I0123 01:09:52.917428 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_53 I0123 01:09:52.918603 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_54 I0123 01:09:52.920104 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_55 I0123 01:09:52.921066 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_56 I0123 01:09:52.922097 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_57 I0123 01:09:52.923228 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_58 I0123 01:09:52.924432 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_59 I0123 01:09:52.927569 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_6 I0123 01:09:52.928652 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_60 I0123 01:09:52.930219 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_61 I0123 01:09:52.931300 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_62 I0123 01:09:52.932329 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_63 I0123 01:09:52.933835 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_64 I0123 01:09:52.934907 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_65 I0123 01:09:52.935982 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_66 I0123 01:09:52.936969 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_67 I0123 01:09:52.937994 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_68 I0123 01:09:52.940397 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_69 I0123 01:09:52.941434 18966 
HdfsTable.java:348] load block md for bundle file part-00000_copy_7 I0123 01:09:52.942536 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_70 I0123 01:09:52.943567 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_71 I0123 01:09:52.944802 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_72 I0123 01:09:52.945691 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 01:09:52.945783 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 1.000ms I0123 01:09:52.945958 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_73 I0123 01:09:52.947034 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_74 I0123 01:09:52.948055 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_75 I0123 01:09:52.949264 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_76 I0123 01:09:52.950316 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_77 I0123 01:09:52.951392 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_78 I0123 01:09:52.952424 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_79 I0123 01:09:52.954430 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_8 I0123 01:09:52.955472 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_80 I0123 01:09:52.956625 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_81 I0123 01:09:52.957898 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_82 I0123 01:09:52.959055 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_83 I0123 01:09:52.960911 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_84 I0123 01:09:52.962996 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_85 I0123 
01:09:52.964190 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_86 I0123 01:09:52.965268 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_87 I0123 01:09:52.967762 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_88 I0123 01:09:52.968909 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_89 I0123 01:09:52.970000 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_9 I0123 01:09:52.971320 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_90 I0123 01:09:52.972494 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_91 I0123 01:09:52.977151 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_92 I0123 01:09:52.981729 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_93 I0123 01:09:52.985208 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_94 I0123 01:09:52.989814 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_95 I0123 01:09:52.992161 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_96 I0123 01:09:52.997105 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_97 I0123 01:09:52.998123 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_98 I0123 01:09:53.000216 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_99 I0123 01:09:53.026247 18966 HdfsTable.java:348] load block md for bundle file part-00000 I0123 01:09:53.028373 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_1 I0123 01:09:53.029690 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_10 I0123 01:09:53.030756 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_100 I0123 01:09:53.032610 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_101 I0123 01:09:53.033697 18966 HdfsTable.java:348] load block md for bundle 
file part-00000_copy_102 I0123 01:09:53.034765 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_103 I0123 01:09:53.036265 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_104 I0123 01:09:53.038028 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_105 I0123 01:09:53.041282 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_106 I0123 01:09:53.042610 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_107 I0123 01:09:53.045763 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_108 I0123 01:09:53.049428 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_109 I0123 01:09:53.051892 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_11 I0123 01:09:53.053087 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_110 I0123 01:09:53.054337 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_111 I0123 01:09:53.058524 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_112 I0123 01:09:53.059671 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_113 I0123 01:09:53.060551 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_114 I0123 01:09:53.061600 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_115 I0123 01:09:53.062515 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_116 I0123 01:09:53.064314 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_117 I0123 01:09:53.065459 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_118 I0123 01:09:53.069097 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_119 I0123 01:09:53.073254 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_12 I0123 01:09:53.074347 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_120 I0123 
01:09:53.075542 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_121 I0123 01:09:53.078336 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_122 I0123 01:09:53.080389 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_123 I0123 01:09:53.081881 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_124 I0123 01:09:53.083277 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_13 I0123 01:09:53.084543 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_14 I0123 01:09:53.088088 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_15 I0123 01:09:53.089118 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_16 I0123 01:09:53.090224 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_17 I0123 01:09:53.092308 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_18 I0123 01:09:53.097110 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_19 I0123 01:09:53.098245 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_2 I0123 01:09:53.099418 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_20 I0123 01:09:53.100458 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_21 I0123 01:09:53.101389 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_22 I0123 01:09:53.102334 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_23 I0123 01:09:53.103425 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_24 I0123 01:09:53.104740 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_25 I0123 01:09:53.109050 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_26 I0123 01:09:53.112403 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_27 I0123 01:09:53.113456 18966 HdfsTable.java:348] load block md 
for bundle file part-00000_copy_28 I0123 01:09:53.114987 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_29 I0123 01:09:53.116150 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_3 I0123 01:09:53.117812 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_30 I0123 01:09:53.129880 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_31 I0123 01:09:53.133018 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_32 I0123 01:09:53.137073 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_33 I0123 01:09:53.141140 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_34 I0123 01:09:53.145139 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_35 I0123 01:09:53.153096 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_36 I0123 01:09:53.157287 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_37 I0123 01:09:53.161527 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_38 I0123 01:09:53.165149 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_39 I0123 01:09:53.171151 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_4 I0123 01:09:53.172744 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_40 I0123 01:09:53.174332 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_41 I0123 01:09:53.175643 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_42 I0123 01:09:53.176728 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_43 I0123 01:09:53.178599 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_44 I0123 01:09:53.179699 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_45 I0123 01:09:53.180757 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_46 I0123 01:09:53.181783 
18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_47 I0123 01:09:53.182700 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_48 I0123 01:09:53.184919 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_49 I0123 01:09:53.186097 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_5 I0123 01:09:53.187211 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_50 I0123 01:09:53.188288 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_51 I0123 01:09:53.189306 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_52 I0123 01:09:53.191351 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_53 I0123 01:09:53.192585 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_54 I0123 01:09:53.193513 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_55 I0123 01:09:53.194512 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_56 I0123 01:09:53.195603 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_57 I0123 01:09:53.197809 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_58 I0123 01:09:53.199003 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_59 I0123 01:09:53.200116 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_6 I0123 01:09:53.201573 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_60 I0123 01:09:53.202596 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_61 I0123 01:09:53.204443 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_62 I0123 01:09:53.205641 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_63 I0123 01:09:53.206710 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_64 I0123 01:09:53.207799 18966 HdfsTable.java:348] load block md for bundle file 
part-00000_copy_65 I0123 01:09:53.208890 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_66 I0123 01:09:53.210805 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_67 I0123 01:09:53.211935 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_68 I0123 01:09:53.213018 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_69 I0123 01:09:53.214145 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_7 I0123 01:09:53.215438 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_70 I0123 01:09:53.217068 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_71 I0123 01:09:53.221107 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_72 I0123 01:09:53.227123 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_73 I0123 01:09:53.229425 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_74 I0123 01:09:53.233158 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_75 I0123 01:09:53.237956 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_76 I0123 01:09:53.239197 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_77 I0123 01:09:53.241060 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_78 I0123 01:09:53.242529 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_79 I0123 01:09:53.246408 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_8 I0123 01:09:53.249640 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_80 I0123 01:09:53.255108 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_81 I0123 01:09:53.257273 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_82 I0123 01:09:53.262912 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_83 I0123 01:09:53.265012 18966 
HdfsTable.java:348] load block md for bundle file part-00000_copy_84 I0123 01:09:53.268734 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_85 I0123 01:09:53.270211 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_86 I0123 01:09:53.275063 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_87 I0123 01:09:53.278589 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_88 I0123 01:09:53.286806 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_89 I0123 01:09:53.290972 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_9 I0123 01:09:53.293802 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_90 I0123 01:09:53.298641 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_91 I0123 01:09:53.302618 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_92 I0123 01:09:53.304146 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_93 I0123 01:09:53.305538 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_94 I0123 01:09:53.307032 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_95 I0123 01:09:53.309963 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_96 I0123 01:09:53.313199 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_97 I0123 01:09:53.317032 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_98 I0123 01:09:53.322981 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_99 I0123 01:09:53.333178 18966 HdfsTable.java:348] load block md for bundle file part-00000 I0123 01:09:53.338639 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_1 I0123 01:09:53.341192 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_10 I0123 01:09:53.342494 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_100 
I0123 01:09:53.345814 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_101 I0123 01:09:53.347247 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_102 I0123 01:09:53.349831 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_103 I0123 01:09:53.353732 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_104 I0123 01:09:53.356945 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_105 I0123 01:09:53.362859 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_106 I0123 01:09:53.364086 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_107 I0123 01:09:53.365902 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_108 I0123 01:09:53.368170 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_109 I0123 01:09:53.373688 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_11 I0123 01:09:53.377003 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_110 I0123 01:09:53.381134 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_111 I0123 01:09:53.385771 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_112 I0123 01:09:53.389058 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_113 I0123 01:09:53.393162 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_114 I0123 01:09:53.396950 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_115 I0123 01:09:53.401135 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_116 I0123 01:09:53.405921 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_117 I0123 01:09:53.408268 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_118 I0123 01:09:53.413377 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_119 I0123 01:09:53.417068 18966 
HdfsTable.java:348] load block md for bundle file part-00000_copy_12 I0123 01:09:53.420303 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_120 I0123 01:09:53.424047 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_121 I0123 01:09:53.431159 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_122 I0123 01:09:53.433640 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_123 I0123 01:09:53.439057 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_124 I0123 01:09:53.442935 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_125 I0123 01:09:53.447124 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_126 I0123 01:09:53.451109 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_127 I0123 01:09:53.454324 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_13 I0123 01:09:53.458955 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_14 I0123 01:09:53.463024 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_15 I0123 01:09:53.466044 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_16 I0123 01:09:53.467357 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_17 I0123 01:09:53.468744 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_18 I0123 01:09:53.470927 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_19 I0123 01:09:53.472556 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_2 I0123 01:09:53.474531 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_20 I0123 01:09:53.478981 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_21 I0123 01:09:53.481326 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_22 I0123 01:09:53.486796 18966 HdfsTable.java:348] load block md for bundle file 
part-00000_copy_23 I0123 01:09:53.490205 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_24 I0123 01:09:53.493232 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_25 I0123 01:09:53.498970 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_26 I0123 01:09:53.502933 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_27 I0123 01:09:53.505769 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_28 I0123 01:09:53.510995 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_29 I0123 01:09:53.512501 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_3 I0123 01:09:53.514982 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_30 I0123 01:09:53.517992 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_31 I0123 01:09:53.522995 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_32 I0123 01:09:53.527037 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_33 I0123 01:09:53.531038 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_34 I0123 01:09:53.532423 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_35 I0123 01:09:53.533715 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_36 I0123 01:09:53.539038 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_37 I0123 01:09:53.541244 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_38 I0123 01:09:53.545667 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_39 I0123 01:09:53.548617 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_4 I0123 01:09:53.550236 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_40 I0123 01:09:53.551378 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_41 I0123 01:09:53.553401 18966 
HdfsTable.java:348] load block md for bundle file part-00000_copy_42 I0123 01:09:53.557031 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_43 I0123 01:09:53.563064 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_44 I0123 01:09:53.567049 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_45 I0123 01:09:53.568938 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_46 I0123 01:09:53.570142 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_47 I0123 01:09:53.573539 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_48 I0123 01:09:53.578052 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_49 I0123 01:09:53.581526 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_5 I0123 01:09:53.587038 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_50 I0123 01:09:53.590445 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_51 I0123 01:09:53.593219 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_52 I0123 01:09:53.594429 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_53 I0123 01:09:53.595722 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_54 I0123 01:09:53.597589 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_55 I0123 01:09:53.598939 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_56 I0123 01:09:53.600260 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_57 I0123 01:09:53.602948 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_58 I0123 01:09:53.605944 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_59 I0123 01:09:53.611011 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_6 I0123 01:09:53.615041 18966 HdfsTable.java:348] load block md for bundle file 
part-00000_copy_60 I0123 01:09:53.617079 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_61 I0123 01:09:53.618371 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_62 I0123 01:09:53.623041 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_63 I0123 01:09:53.627033 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_64 I0123 01:09:53.628361 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_65 I0123 01:09:53.631017 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_66 I0123 01:09:53.635192 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_67 I0123 01:09:53.638433 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_68 I0123 01:09:53.640102 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_69 I0123 01:09:53.643034 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_7 I0123 01:09:53.647058 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_70 I0123 01:09:53.651082 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_71 I0123 01:09:53.652488 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_72 I0123 01:09:53.653815 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_73 I0123 01:09:53.654974 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_74 I0123 01:09:53.656195 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_75 I0123 01:09:53.657428 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_76 I0123 01:09:53.658990 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_77 I0123 01:09:53.660266 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_78 I0123 01:09:53.661443 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_79 I0123 01:09:53.662603 18966 
HdfsTable.java:348] load block md for bundle file part-00000_copy_8 I0123 01:09:53.663853 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_80 I0123 01:09:53.665051 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_81 I0123 01:09:53.666151 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_82 I0123 01:09:53.667479 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_83 I0123 01:09:53.668668 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_84 I0123 01:09:53.669996 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_85 I0123 01:09:53.671182 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_86 I0123 01:09:53.672394 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_87 I0123 01:09:53.673739 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_88 I0123 01:09:53.675076 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_89 I0123 01:09:53.676041 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_9 I0123 01:09:53.677117 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_90 I0123 01:09:53.678627 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_91 I0123 01:09:53.679997 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_92 I0123 01:09:53.681732 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_93 I0123 01:09:53.686981 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_94 I0123 01:09:53.688206 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_95 I0123 01:09:53.689390 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_96 I0123 01:09:53.695041 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_97 I0123 01:09:53.698993 18966 HdfsTable.java:348] load block md for bundle file 
part-00000_copy_98 I0123 01:09:53.702823 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_99 I0123 01:09:53.728495 18966 HdfsTable.java:348] load block md for bundle file part-00000 I0123 01:09:53.733108 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_1 I0123 01:09:53.734474 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_10 I0123 01:09:53.737210 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_100 I0123 01:09:53.741659 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_101 I0123 01:09:53.745569 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_102 I0123 01:09:53.747673 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_103 I0123 01:09:53.749122 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_104 I0123 01:09:53.750486 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_105 I0123 01:09:53.751677 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_106 I0123 01:09:53.757308 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_107 I0123 01:09:53.758692 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_108 I0123 01:09:53.761139 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_109 I0123 01:09:53.762521 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_11 I0123 01:09:53.765594 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_110 I0123 01:09:53.766790 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_111 I0123 01:09:53.769274 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_112 I0123 01:09:53.770376 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_113 I0123 01:09:53.771294 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_114 I0123 01:09:53.772291 18966 
HdfsTable.java:348] load block md for bundle file part-00000_copy_115 I0123 01:09:53.773493 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_116 I0123 01:09:53.774479 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_117 I0123 01:09:53.775600 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_118 I0123 01:09:53.776659 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_119 I0123 01:09:53.777802 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_12 I0123 01:09:53.779000 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_120 I0123 01:09:53.780146 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_121 I0123 01:09:53.781252 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_122 I0123 01:09:53.782269 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_123 I0123 01:09:53.786021 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_124 I0123 01:09:53.789892 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_13 I0123 01:09:53.793999 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_14 I0123 01:09:53.797662 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_15 I0123 01:09:53.801583 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_16 I0123 01:09:53.805985 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_17 I0123 01:09:53.807675 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_18 I0123 01:09:53.810978 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_19 I0123 01:09:53.814959 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_2 I0123 01:09:53.816063 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_20 I0123 01:09:53.817138 18966 HdfsTable.java:348] load block md for bundle file 
part-00000_copy_21 I0123 01:09:53.818271 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_22 I0123 01:09:53.819499 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_23 I0123 01:09:53.820798 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_24 I0123 01:09:53.825057 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_25 I0123 01:09:53.829952 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_26 I0123 01:09:53.833225 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_27 I0123 01:09:53.835361 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_28 I0123 01:09:53.845234 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_29 I0123 01:09:53.846550 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_3 I0123 01:09:53.848740 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_30 I0123 01:09:53.849937 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_31 I0123 01:09:53.854481 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_32 I0123 01:09:53.858656 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:09:53.858749 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms I0123 01:09:53.858896 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_33 I0123 01:09:53.863090 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_34 I0123 01:09:53.867064 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_35 I0123 01:09:53.869956 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_36 I0123 01:09:53.873054 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_37 I0123 01:09:53.877239 18966 HdfsTable.java:348] load block md for bundle 
file part-00000_copy_38 I0123 01:09:53.881161 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_39 I0123 01:09:53.885311 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_4 I0123 01:09:53.886447 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_40 I0123 01:09:53.887603 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_41 I0123 01:09:53.888919 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_42 I0123 01:09:53.890017 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_43 I0123 01:09:53.891196 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_44 I0123 01:09:53.892246 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_45 I0123 01:09:53.894309 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_46 I0123 01:09:53.897130 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_47 I0123 01:09:53.901149 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_48 I0123 01:09:53.905213 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_49 I0123 01:09:53.910620 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_5 I0123 01:09:53.913103 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_50 I0123 01:09:53.917114 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_51 I0123 01:09:53.921648 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_52 I0123 01:09:53.925560 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_53 I0123 01:09:53.929563 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_54 I0123 01:09:53.930603 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_55 I0123 01:09:53.931653 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_56 I0123 01:09:53.932725 18966 
HdfsTable.java:348] load block md for bundle file part-00000_copy_57 I0123 01:09:53.938812 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_58 I0123 01:09:53.943081 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_59 I0123 01:09:53.944346 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_6 I0123 01:09:53.945391 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_60 I0123 01:09:53.946636 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_61 I0123 01:09:53.947631 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_62 I0123 01:09:53.951227 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_63 I0123 01:09:53.953603 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_64 I0123 01:09:53.954566 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_65 I0123 01:09:53.958075 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_66 I0123 01:09:53.961074 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_67 I0123 01:09:53.965777 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_68 I0123 01:09:53.968632 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_69 I0123 01:09:53.969977 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_7 I0123 01:09:53.971619 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_70 I0123 01:09:53.973177 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_71 I0123 01:09:53.975268 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_72 I0123 01:09:53.976646 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_73 I0123 01:09:53.981827 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_74 I0123 01:09:53.985219 18966 HdfsTable.java:348] load block md for bundle file 
part-00000_copy_75 I0123 01:09:53.989684 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_76 I0123 01:09:53.994134 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_77 I0123 01:09:53.995345 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_78 I0123 01:09:53.998616 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_79 I0123 01:09:54.002337 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_8 I0123 01:09:54.003587 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_80 I0123 01:09:54.005018 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_81 I0123 01:09:54.009239 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_82 I0123 01:09:54.013353 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_83 I0123 01:09:54.017266 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_84 I0123 01:09:54.021268 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_85 I0123 01:09:54.025353 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_86 I0123 01:09:54.028884 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_87 I0123 01:09:54.031314 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_88 I0123 01:09:54.037520 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_89 I0123 01:09:54.041285 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_9 I0123 01:09:54.045174 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_90 I0123 01:09:54.046412 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_91 I0123 01:09:54.049095 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_92 I0123 01:09:54.053226 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_93 I0123 01:09:54.054383 18966 
HdfsTable.java:348] load block md for bundle file part-00000_copy_94 I0123 01:09:54.055593 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_95 I0123 01:09:54.057919 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_96 I0123 01:09:54.060837 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_97 I0123 01:09:54.061992 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_98 I0123 01:09:54.064008 18966 HdfsTable.java:348] load block md for bundle file part-00000_copy_99 I0123 01:09:54.859257 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:09:54.859378 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:09:54.946468 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 01:09:54.946694 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 01:09:55.860107 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:09:55.860210 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:09:56.860816 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:09:56.860951 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:09:56.947100 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 01:09:56.947194 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 01:09:57.861816 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:09:57.862083 
11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:09:58.862201 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:09:58.862305 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:09:58.947693 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 01:09:58.947957 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 1.000ms I0123 01:09:59.862944 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:09:59.863046 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:10:00.864161 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:10:00.864253 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:10:00.948209 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 01:10:00.948283 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 01:10:01.834151 18634 rpc-trace.cc:184] RPC call: CatalogService.ResetMetadata(from 10.153.201.26:43550) I0123 01:10:01.834774 18634 catalog-server.cc:77] ResetMetadata(): request=TResetMetadataRequest { 01: protocol_version (i32) = 0, 02: is_refresh (bool) = true, 03: table_name (struct) = TTableName { 01: db_name (string) = "history", 02: table_name (string) = "airwatch_console", }, 05: partition_spec (list) = list[3] { [0] = TPartitionKeyValue { 01: name (string) = "pa__schema_version", 02: value (string) = "1", }, [1] = TPartitionKeyValue { 01: name 
(string) = "pa__collector_id", 02: value (string) = "airwatch-admin-ui.1_0", }, [2] = TPartitionKeyValue { 01: name (string) = "pa__arrival_day", 02: value (string) = "1485129600", }, }, } I0123 01:10:01.864948 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:10:01.865088 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:10:01.875557 8549 rpc-trace.cc:184] RPC call: CatalogService.ResetMetadata(from 10.153.201.26:33597) I0123 01:10:01.876303 8549 catalog-server.cc:77] ResetMetadata(): request=TResetMetadataRequest { 01: protocol_version (i32) = 0, 02: is_refresh (bool) = true, 03: table_name (struct) = TTableName { 01: db_name (string) = "history", 02: table_name (string) = "pa__streaming_batch", }, 05: partition_spec (list) = list[3] { [0] = TPartitionKeyValue { 01: name (string) = "pa__schema_version", 02: value (string) = "1", }, [1] = TPartitionKeyValue { 01: name (string) = "pa__collector_id", 02: value (string) = "ph_streaming_etl.1_0", }, [2] = TPartitionKeyValue { 01: name (string) = "pa__arrival_day", 02: value (string) = "1485129600", }, }, } I0123 01:10:01.910138 12623 rpc-trace.cc:184] RPC call: CatalogService.ResetMetadata(from 10.153.201.22:38761) I0123 01:10:01.911034 12623 catalog-server.cc:77] ResetMetadata(): request=TResetMetadataRequest { 01: protocol_version (i32) = 0, 02: is_refresh (bool) = true, 03: table_name (struct) = TTableName { 01: db_name (string) = "history", 02: table_name (string) = "itfm_ui", }, 05: partition_spec (list) = list[3] { [0] = TPartitionKeyValue { 01: name (string) = "pa__schema_version", 02: value (string) = "1", }, [1] = TPartitionKeyValue { 01: name (string) = "pa__collector_id", 02: value (string) = "vrb_ui.7_2_0", }, [2] = TPartitionKeyValue { 01: name (string) = "pa__arrival_day", 02: value (string) = "1485129600", }, }, } I0123 01:10:01.983722 4162 rpc-trace.cc:184] RPC call: 
CatalogService.ResetMetadata(from 10.153.201.22:58692) I0123 01:10:01.984679 4162 catalog-server.cc:77] ResetMetadata(): request=TResetMetadataRequest { 01: protocol_version (i32) = 0, 02: is_refresh (bool) = true, 03: table_name (struct) = TTableName { 01: db_name (string) = "history_staging", 02: table_name (string) = "astro_ui", }, 05: partition_spec (list) = list[3] { [0] = TPartitionKeyValue { 01: name (string) = "pa__schema_version", 02: value (string) = "1", }, [1] = TPartitionKeyValue { 01: name (string) = "pa__collector_id", 02: value (string) = "astro.1_0", }, [2] = TPartitionKeyValue { 01: name (string) = "pa__arrival_day", 02: value (string) = "1485129600", }, }, } I0123 01:10:02.195271 11602 rpc-trace.cc:184] RPC call: CatalogService.ResetMetadata(from 10.153.201.25:52901) I0123 01:10:02.196039 11602 catalog-server.cc:77] ResetMetadata(): request=TResetMetadataRequest { 01: protocol_version (i32) = 0, 02: is_refresh (bool) = true, 03: table_name (struct) = TTableName { 01: db_name (string) = "history", 02: table_name (string) = "esxi_hostinfo", }, 05: partition_spec (list) = list[3] { [0] = TPartitionKeyValue { 01: name (string) = "pa__schema_version", 02: value (string) = "1", }, [1] = TPartitionKeyValue { 01: name (string) = "pa__collector_id", 02: value (string) = "hostclient.1_0", }, [2] = TPartitionKeyValue { 01: name (string) = "pa__arrival_day", 02: value (string) = "1485129600", }, }, } I0123 01:10:02.214392 12831 rpc-trace.cc:184] RPC call: CatalogService.ResetMetadata(from 10.153.201.26:37155) I0123 01:10:02.215291 12831 catalog-server.cc:77] ResetMetadata(): request=TResetMetadataRequest { 01: protocol_version (i32) = 0, 02: is_refresh (bool) = true, 03: table_name (struct) = TTableName { 01: db_name (string) = "history", 02: table_name (string) = "h5_ui_errors", }, 05: partition_spec (list) = list[3] { [0] = TPartitionKeyValue { 01: name (string) = "pa__schema_version", 02: value (string) = "1", }, [1] = TPartitionKeyValue { 01: name 
(string) = "pa__collector_id", 02: value (string) = "vsphere_h5c.6_5", }, [2] = TPartitionKeyValue { 01: name (string) = "pa__arrival_day", 02: value (string) = "1485129600", }, }, } I0123 01:10:02.801933 6037 rpc-trace.cc:184] RPC call: CatalogService.ResetMetadata(from 10.153.201.21:37512) I0123 01:10:02.802866 6037 catalog-server.cc:77] ResetMetadata(): request=TResetMetadataRequest { 01: protocol_version (i32) = 0, 02: is_refresh (bool) = true, 03: table_name (struct) = TTableName { 01: db_name (string) = "history_staging", 02: table_name (string) = "esxi_hostinfo", }, 05: partition_spec (list) = list[3] { [0] = TPartitionKeyValue { 01: name (string) = "pa__schema_version", 02: value (string) = "1", }, [1] = TPartitionKeyValue { 01: name (string) = "pa__collector_id", 02: value (string) = "hostclient.1_0", }, [2] = TPartitionKeyValue { 01: name (string) = "pa__arrival_day", 02: value (string) = "1485129600", }, }, } I0123 01:10:02.866295 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:10:02.866431 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:10:02.948673 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 01:10:02.948791 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 1.000ms I0123 01:10:03.351763 10057 rpc-trace.cc:184] RPC call: CatalogService.ResetMetadata(from 10.153.201.23:57413) I0123 01:10:03.352073 10057 catalog-server.cc:77] ResetMetadata(): request=TResetMetadataRequest { 01: protocol_version (i32) = 0, 02: is_refresh (bool) = true, 03: table_name (struct) = TTableName { 01: db_name (string) = "history_staging", 02: table_name (string) = "airwatch_console", }, 05: partition_spec (list) = list[3] { [0] = TPartitionKeyValue { 01: name (string) = "pa__schema_version", 02: value (string) = "1", }, [1] 
= TPartitionKeyValue { 01: name (string) = "pa__collector_id", 02: value (string) = "airwatch-admin-ui.1_0", }, [2] = TPartitionKeyValue { 01: name (string) = "pa__arrival_day", 02: value (string) = "1485129600", }, }, } I0123 01:10:03.867005 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:10:03.867141 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:10:04.867760 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:10:04.867916 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:10:04.949213 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 01:10:04.949308 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 01:10:05.867581 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:10:05.867689 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:10:06.867902 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:10:06.868007 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:10:06.949877 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 01:10:06.949997 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 01:10:07.868690 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:10:07.868793 11232 rpc-trace.cc:194] RPC call: 
statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms I0123 01:10:08.868562 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:10:08.868687 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:10:08.950388 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 01:10:08.950481 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 01:10:09.868513 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:10:09.868710 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:10:10.868468 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:10:10.868566 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:10:10.950865 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 01:10:10.950991 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 01:10:11.868793 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:10:11.868926 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:10:11.987391 19147 webserver.cc:417] Rendering page /jsonmetrics took 1251.63K clock cycles I0123 01:10:12.868939 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:10:12.869048 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 
10.153.201.11:51416 took 0.000ns I0123 01:10:12.951458 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 01:10:12.951552 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 01:10:13.870100 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:10:13.870226 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:10:14.871160 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:10:14.871259 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:10:14.952044 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 01:10:14.952239 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 01:10:15.871579 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:10:15.871922 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms I0123 01:10:16.872189 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:10:16.872293 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:10:16.952514 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 01:10:16.952630 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 01:10:17.873426 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:10:17.873524 
11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:10:18.873528 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:10:18.873646 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:10:18.953090 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 01:10:18.953184 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 01:10:19.502084 11222 CatalogServiceCatalog.java:200] Reloading cache pool names from HDFS I0123 01:10:19.574285 18966 HdfsTable.java:441] Loading disk ids for: history.bundle. nodes: 14. filesystem: hdfs://ph-hdp-prd-nn01:8020 I0123 01:10:19.874701 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:10:19.874913 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms I0123 01:10:20.028619 14317 catalog-server.cc:71] ExecDdl(): response=TDdlExecResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 0, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, }, } I0123 01:10:20.028821 14317 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ExecDdl from 10.153.201.19:35818 took 35s759ms I0123 01:10:20.029954 18634 CatalogServiceCatalog.java:1191] Refreshing Partition metadata: history.airwatch_console pa__arrival_day=1485129600/pa__collector_id=airwatch-admin-ui.1_0/pa__schema_version=1 I0123 01:10:20.031401 8549 CatalogServiceCatalog.java:1191] Refreshing Partition metadata: history.pa__streaming_batch 
pa__arrival_day=1485129600/pa__collector_id=ph_streaming_etl.1_0/pa__schema_version=1
I0123 01:10:20.033116 11602 CatalogServiceCatalog.java:1191] Refreshing Partition metadata: history.esxi_hostinfo pa__arrival_day=1485129600/pa__collector_id=hostclient.1_0/pa__schema_version=1
I0123 01:10:20.033298 4162 CatalogServiceCatalog.java:1191] Refreshing Partition metadata: history_staging.astro_ui pa__arrival_day=1485129600/pa__collector_id=astro.1_0/pa__schema_version=1
I0123 01:10:20.034054 12831 CatalogServiceCatalog.java:1191] Refreshing Partition metadata: history.h5_ui_errors pa__arrival_day=1485129600/pa__collector_id=vsphere_h5c.6_5/pa__schema_version=1
I0123 01:10:20.038810 12623 CatalogServiceCatalog.java:1191] Refreshing Partition metadata: history.itfm_ui pa__arrival_day=1485129600/pa__collector_id=vrb_ui.7_2_0/pa__schema_version=1
I0123 01:10:20.040112 10057 CatalogServiceCatalog.java:1191] Refreshing Partition metadata: history_staging.airwatch_console pa__arrival_day=1485129600/pa__collector_id=airwatch-admin-ui.1_0/pa__schema_version=1
I0123 01:10:20.040431 6037 CatalogServiceCatalog.java:1191] Refreshing Partition metadata: history_staging.esxi_hostinfo pa__arrival_day=1485129600/pa__collector_id=hostclient.1_0/pa__schema_version=1
I0123 01:10:20.068912 7181 rpc-trace.cc:184] RPC call: CatalogService.ExecDdl(from 10.153.201.19:36786)
I0123 01:10:20.069811 7181 catalog-server.cc:65] ExecDdl(): request=TDdlExecRequest { 01: protocol_version (i32) = 0, 02: ddl_type (i32) = 3, 06: create_table_params (struct) = TCreateTableParams { 01: table_name (struct) = TTableName { 01: db_name (string) = "staging", 02: table_name (string) = "ph_dow_20170123_010826_bundle", }, 02: columns (list) = list[21] { [0] = TColumn { 01: columnName (string) = "id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [1] = TColumn { 01: columnName (string) = "internal_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [2] = TColumn { 01: columnName (string) = "size_in_bytes", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [3] = TColumn { 01: columnName (string) = "ext", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [4] = TColumn { 01: columnName (string) = "pa__detected_proxy_sources", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [5] = TColumn { 01: columnName (string) = "pa__proxy_source", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [6] = TColumn { 01: columnName (string) = "pa__os_language", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [7] = TColumn { 01: columnName (string) = "collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [8] = TColumn { 01: columnName (string) = "collection__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [9] = TColumn { 01: columnName (string) = "pa__is_external", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [10] = TColumn { 01: columnName (string) = "pa__collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [11] = TColumn { 01: columnName (string) = "pa__bundle__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [12] = TColumn { 01: columnName (string) = "pa__arrival_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [13] = TColumn { 01: columnName (string) = "pa__processed_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [14] = TColumn { 01: columnName (string) = "pa__kafka_partition_offset", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [15] = TColumn { 01: columnName (string) = "pa__kafka_partition", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [16] = TColumn { 01: columnName (string) = "envelope_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [17] = TColumn { 01: columnName (string) = "pa__client_ip_path", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [18] = TColumn { 01: columnName (string) = "pa__arrival_day", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [19] = TColumn { 01: columnName (string) = "pa__collector_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, [20] = TColumn { 01: columnName (string) = "pa__schema_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: comment (string) = "Inferred from Parquet file.", }, }, 04: file_format (i32) = 4, 05: is_external (bool) = true, 06: if_not_exists (bool) = false, 07: owner (string) = "phanalytics@PHONEHOME.VMWARE.COM", 08: row_format (struct) = TTableRowFormat { }, 10: location (string) = "hdfs://ph-hdp-prd-nn01:8020/user/etl/staging/production__snapshot/parquet/ph_downloader.1_0/bundle", }, 17: header (struct) = TCatalogServiceRequestHeader { 01: requesting_user (string) = "phanalytics@PHONEHOME.VMWARE.COM", }, }
I0123 01:10:20.070441 7181 CatalogOpExecutor.java:1367] Creating table staging.ph_dow_20170123_010826_bundle
I0123 01:10:20.093364 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000
I0123 01:10:20.094295 4162 HdfsTable.java:348] load block md for astro_ui file part-00000
I0123 01:10:20.097337 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_1
I0123 01:10:20.097540 18634 HdfsTable.java:348] load block md for airwatch_console file part-00001
I0123 01:10:20.099306 6037 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000
I0123 01:10:20.099486 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_1
I0123 01:10:20.099669 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000
I0123 01:10:20.099982 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_10
I0123 01:10:20.100265 18634 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_1
I0123 01:10:20.100970 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001
I0123 01:10:20.101143 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000
I0123 01:10:20.101537 12623 HdfsTable.java:348] load block md for itfm_ui file part-00001
I0123 01:10:20.101711 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_11
I0123 01:10:20.102116 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_1
I0123 01:10:20.102396 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_10
I0123 01:10:20.102675 6037 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_1
I0123 01:10:20.107076 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_1
I0123 01:10:20.107201 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_11
I0123 01:10:20.107296 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_1
I0123 01:10:20.107483 18634 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_10
I0123 01:10:20.107758 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_10
I0123 01:10:20.108008 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_12
I0123 01:10:20.108407 12623 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_1
I0123 01:10:20.108779 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_10
I0123 01:10:20.109287 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_11
I0123 01:10:20.109767 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_13
I0123 01:10:20.110671 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_12
I0123 01:10:20.111232 6037 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_10
I0123 01:10:20.111424 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_10
I0123 01:10:20.111618 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_14
I0123 01:10:20.111742 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_11
I0123 01:10:20.111866 12623 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_10
I0123 01:10:20.111984 18634 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_11
I0123 01:10:20.112105 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_12
I0123 01:10:20.112196 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_13
I0123 01:10:20.112319 6037 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_11
I0123 01:10:20.112726 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_11
I0123 01:10:20.113270 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_15
I0123 01:10:20.113366 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_12
I0123 01:10:20.113473 12623 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_11
I0123 01:10:20.113955 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_13
I0123 01:10:20.114430 18634 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_12
I0123 01:10:20.114959 6037 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_12
I0123 01:10:20.115353 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_14
I0123 01:10:20.115483 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_12
I0123 01:10:20.115617 12623 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_12
I0123 01:10:20.115777 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_13
I0123 01:10:20.116024 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_16
I0123 01:10:20.116569 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_14
I0123 01:10:20.117105 18634 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_13
I0123 01:10:20.117197 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_13
I0123 01:10:20.117472 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_14
I0123 01:10:20.117696 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_17
I0123 01:10:20.118052 6037 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_13
I0123 01:10:20.118221 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_15
I0123 01:10:20.118510 18634 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_14
I0123 01:10:20.118626 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_15
I0123 01:10:20.118840 12623 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_13
I0123 01:10:20.118958 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_15
I0123 01:10:20.119065 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_14
I0123 01:10:20.119295 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_18
I0123 01:10:20.119616 6037 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_14
I0123 01:10:20.120559 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_16
I0123 01:10:20.120764 18634 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_15
I0123 01:10:20.121155 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_16
I0123 01:10:20.121259 12623 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_14
I0123 01:10:20.121373 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_15
I0123 01:10:20.121454 6037 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_15
I0123 01:10:20.121709 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_17
I0123 01:10:20.121841 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_16
I0123 01:10:20.121971 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_19
I0123 01:10:20.122241 18634 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_16
I0123 01:10:20.122650 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_17
I0123 01:10:20.122805 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_16
I0123 01:10:20.123414 12623 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_15
I0123 01:10:20.123535 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_18
I0123 01:10:20.123859 6037 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_16
I0123 01:10:20.124367 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_17
I0123 01:10:20.124835 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_2
I0123 01:10:20.125224 18634 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_17
I0123 01:10:20.125671 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_17
I0123 01:10:20.126271 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_18
I0123 01:10:20.126680 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_19
I0123 01:10:20.127094 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_18
I0123 01:10:20.127192 12623 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_16
I0123 01:10:20.127323 6037 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_17
I0123 01:10:20.127467 18634 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_18
I0123 01:10:20.127622 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_20
I0123 01:10:20.128047 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_19
I0123 01:10:20.128336 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_2
I0123 01:10:20.128453 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_18
I0123 01:10:20.128779 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_19
I0123 01:10:20.129143 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_21
I0123 01:10:20.129462 12623 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_17
I0123 01:10:20.129813 18634 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_19
I0123 01:10:20.130067 6037 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_18
I0123 01:10:20.130826 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_2
I0123 01:10:20.131196 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_19
I0123 01:10:20.131352 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_2
I0123 01:10:20.131490 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_20
I0123 01:10:20.131656 12623 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_18
I0123 01:10:20.131822 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_22
I0123 01:10:20.132454 18634 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_2
I0123 01:10:20.133312 6037 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_19
I0123 01:10:20.133499 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_20
I0123 01:10:20.133648 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_20
I0123 01:10:20.133826 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_2
I0123 01:10:20.134053 12623 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_19
I0123 01:10:20.134244 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_21
I0123 01:10:20.134399 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_23
I0123 01:10:20.134958 18634 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_3
I0123 01:10:20.135301 6037 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_2
I0123 01:10:20.135509 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_21
I0123 01:10:20.135668 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_20
I0123 01:10:20.135882 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_21
I0123 01:10:20.136096 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_24
I0123 01:10:20.136330 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_22
I0123 01:10:20.137120 18634 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_4
I0123 01:10:20.137763 12623 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_2
I0123 01:10:20.137949 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_22
I0123 01:10:20.138140 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_21
I0123 01:10:20.138327 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_25
I0123 01:10:20.138531 6037 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_20
I0123 01:10:20.138842 18634 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_5
I0123 01:10:20.139109 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_22
I0123 01:10:20.139518 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_23
I0123 01:10:20.139963 12623 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_20
I0123 01:10:20.140184 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_23
I0123 01:10:20.140347 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_22
I0123 01:10:20.140489 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_23
I0123 01:10:20.140779 6037 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_21
I0123 01:10:20.140995 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_26
I0123 01:10:20.141286 18634 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_6
I0123 01:10:20.141912 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_24
I0123 01:10:20.144243 7181 catalog-server.cc:71] ExecDdl(): response=TDdlExecResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 43408, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 04: updated_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 43408, 05: table (struct) = TTable { 01: db_name (string) = "staging", 02: tbl_name (string) = "ph_dow_20170123_010826_bundle", 04: id (i32) = 18765, }, }, }, }
I0123 01:10:20.144594 7181 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ExecDdl from 10.153.201.19:36786 took 75.000ms
I0123 01:10:20.145023 12623 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_21
I0123 01:10:20.145264 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_23
I0123 01:10:20.145555 18634 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_7
I0123 01:10:20.145831 6037 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_3
I0123 01:10:20.146026 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_27
I0123 01:10:20.146179 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_24
I0123 01:10:20.146309 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_25
I0123 01:10:20.146467 12623 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_22
I0123 01:10:20.146703 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_24
I0123 01:10:20.146883 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_24
I0123 01:10:20.147455 18634 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_8
I0123 01:10:20.148243 6037 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_4
I0123 01:10:20.148402 12623 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_23
I0123 01:10:20.148550 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_25
I0123 01:10:20.148799 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_25
I0123 01:10:20.148965 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_28
I0123 01:10:20.149233 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_25
I0123 01:10:20.149930 18634 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_9
I0123 01:10:20.150771 12623 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_24
I0123 01:10:20.150945 6037 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_5
I0123 01:10:20.151067 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_26
I0123 01:10:20.151212 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_26
I0123 01:10:20.151365 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_29
I0123 01:10:20.151764 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_26
I0123 01:10:20.152024 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_26
I0123 01:10:20.152189 12623 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_25
I0123 01:10:20.152406 6037 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_6
I0123 01:10:20.152884 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_27
I0123 01:10:20.153023 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_27
I0123 01:10:20.153380 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_3
I0123 01:10:20.153808 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_27
I0123 01:10:20.154014 12623 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_26
I0123 01:10:20.154196 6037 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_7
I0123 01:10:20.154441 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_27
I0123 01:10:20.154595 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_28
I0123 01:10:20.154805 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_28
I0123 01:10:20.154980 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_30
I0123 01:10:20.155602 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_28
I0123 01:10:20.155776 12623 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_27
I0123 01:10:20.156678 18634 FsPermissionChecker.java:290] No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs) Java exception follows: org.apache.hadoop.hdfs.protocol.AclException: The ACL operation has been rejected.  Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false.
    at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
    at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3344)
    at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1983)
    at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1980)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getAclStatus(DistributedFileSystem.java:1980)
    at com.cloudera.impala.util.FsPermissionChecker.getPermissions(FsPermissionChecker.java:288)
    at com.cloudera.impala.catalog.HdfsTable.getAvailableAccessLevel(HdfsTable.java:760)
    at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:844)
    at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:792)
    at com.cloudera.impala.catalog.HdfsTable.reloadPartition(HdfsTable.java:1951)
    at com.cloudera.impala.catalog.CatalogServiceCatalog.reloadPartition(CatalogServiceCatalog.java:1210)
    at com.cloudera.impala.service.CatalogOpExecutor.execResetMetadata(CatalogOpExecutor.java:2707)
    at com.cloudera.impala.service.JniCatalog.resetMetadata(JniCatalog.java:151)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AclException): The ACL operation has been rejected.  Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false.
    at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
    at org.apache.hadoop.ipc.Client.call(Client.java:1471)
    at org.apache.hadoop.ipc.Client.call(Client.java:1408)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
    at com.sun.proxy.$Proxy10.getAclStatus(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAclStatus(ClientNamenodeProtocolTranslatorPB.java:1327)
    at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy11.getAclStatus(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3342)
    ... 12 more
I0123 01:10:20.157050 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_28
I0123 01:10:20.157196 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_29
I0123 01:10:20.157380 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_31
I0123 01:10:20.157562 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_29
I0123 01:10:20.157718 6037 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_8
I0123 01:10:20.157954 12623 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_28
I0123 01:10:20.158315 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_29
I0123 01:10:20.158609 18634 HdfsTable.java:441] Loading disk ids for: history.airwatch_console. nodes: 14. filesystem: hdfs://ph-hdp-prd-nn01:8020
I0123 01:10:20.159337 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_29
I0123 01:10:20.159557 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_32
I0123 01:10:20.159705 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_3
I0123 01:10:20.160189 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_3
I0123 01:10:20.160305 6037 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_9
I0123 01:10:20.160688 12623 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_29
I0123 01:10:20.160917 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_3
I0123 01:10:20.161496 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_3
I0123 01:10:20.161712 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_33
I0123 01:10:20.161872 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_30
I0123 01:10:20.161970 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_30
I0123 01:10:20.162094 12623 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_3
I0123 01:10:20.162649 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_30
I0123 01:10:20.162817 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_30
I0123 01:10:20.163009 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_34
I0123 01:10:20.163303 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_31
I0123 01:10:20.164232 12623 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_30
I0123 01:10:20.164466 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_31
I0123 01:10:20.164654 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_31
I0123 01:10:20.164894 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_31
I0123 01:10:20.165045 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_35
I0123 01:10:20.165731 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_32
I0123 01:10:20.165890 12623 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_31
I0123 01:10:20.166071 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_32
I0123 01:10:20.166188 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_32
I0123 01:10:20.167086 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_32
I0123 01:10:20.167577 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_36
I0123 01:10:20.169525 6037 FsPermissionChecker.java:290] No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs) Java exception follows: org.apache.hadoop.hdfs.protocol.AclException: The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false.
    at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
    at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3344)
    at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1983)
    at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1980)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getAclStatus(DistributedFileSystem.java:1980)
    at com.cloudera.impala.util.FsPermissionChecker.getPermissions(FsPermissionChecker.java:288)
    at com.cloudera.impala.catalog.HdfsTable.getAvailableAccessLevel(HdfsTable.java:760)
    at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:844)
    at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:792)
    at com.cloudera.impala.catalog.HdfsTable.reloadPartition(HdfsTable.java:1951)
    at com.cloudera.impala.catalog.CatalogServiceCatalog.reloadPartition(CatalogServiceCatalog.java:1210)
    at com.cloudera.impala.service.CatalogOpExecutor.execResetMetadata(CatalogOpExecutor.java:2707)
    at com.cloudera.impala.service.JniCatalog.resetMetadata(JniCatalog.java:151)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AclException): The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false.
    at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
    at org.apache.hadoop.ipc.Client.call(Client.java:1471)
    at org.apache.hadoop.ipc.Client.call(Client.java:1408)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
    at com.sun.proxy.$Proxy10.getAclStatus(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAclStatus(ClientNamenodeProtocolTranslatorPB.java:1327)
    at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy11.getAclStatus(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3342)
    ... 12 more
I0123 01:10:20.169942 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_37
I0123 01:10:20.170171 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_33
I0123 01:10:20.170331 12623 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_32
I0123 01:10:20.170565 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_33
I0123 01:10:20.170723 6037 HdfsTable.java:441] Loading disk ids for: history_staging.esxi_hostinfo. nodes: 14. filesystem: hdfs://ph-hdp-prd-nn01:8020
I0123 01:10:20.170907 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_33
I0123 01:10:20.171136 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_33
I0123 01:10:20.171275 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_38
I0123 01:10:20.171547 12623 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_4
I0123 01:10:20.171821 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_34
I0123 01:10:20.171908 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_34
I0123 01:10:20.172196 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_34
I0123 01:10:20.172663 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_34
I0123 01:10:20.173015 12623 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_5
I0123 01:10:20.173146 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_39
I0123 01:10:20.173259 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_35
I0123 01:10:20.173487 4162 HdfsTable.java:348] load block md for
astro_ui file part-00000_copy_35
I0123 01:10:20.173667 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_35
I0123 01:10:20.173789 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_35
I0123 01:10:20.174192 12623 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_6
I0123 01:10:20.174700 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_4
I0123 01:10:20.174820 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_36
I0123 01:10:20.174978 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_36
I0123 01:10:20.175369 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_36
I0123 01:10:20.176399 12623 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_7
I0123 01:10:20.176654 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_40
I0123 01:10:20.176791 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_37
I0123 01:10:20.176926 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_36
I0123 01:10:20.177038 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_37
I0123 01:10:20.177568 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_37
I0123 01:10:20.177781 12623 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_8
I0123 01:10:20.178398 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_41
I0123 01:10:20.179028 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_38
I0123 01:10:20.179226 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_38
I0123 01:10:20.179731 12623 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_9
I0123 01:10:20.180197 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_38
I0123 01:10:20.180356 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_42
I0123 01:10:20.182157 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_37
I0123 01:10:20.190577 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_39
I0123 01:10:20.192270 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_39
I0123 01:10:20.192672 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_39
I0123 01:10:20.192941 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_4
I0123 01:10:20.193044 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_43
I0123 01:10:20.193240 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_38
I0123 01:10:20.193749 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_4
I0123 01:10:20.193996 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_4
I0123 01:10:20.194798 12623 FsPermissionChecker.java:290] No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs) Java exception follows: org.apache.hadoop.hdfs.protocol.AclException: The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false.
    at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
    at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3344)
    at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1983)
    at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1980)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getAclStatus(DistributedFileSystem.java:1980)
    at com.cloudera.impala.util.FsPermissionChecker.getPermissions(FsPermissionChecker.java:288)
    at com.cloudera.impala.catalog.HdfsTable.getAvailableAccessLevel(HdfsTable.java:760)
    at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:844)
    at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:792)
    at com.cloudera.impala.catalog.HdfsTable.reloadPartition(HdfsTable.java:1951)
    at com.cloudera.impala.catalog.CatalogServiceCatalog.reloadPartition(CatalogServiceCatalog.java:1210)
    at com.cloudera.impala.service.CatalogOpExecutor.execResetMetadata(CatalogOpExecutor.java:2707)
    at com.cloudera.impala.service.JniCatalog.resetMetadata(JniCatalog.java:151)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AclException): The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false.
    at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
    at org.apache.hadoop.ipc.Client.call(Client.java:1471)
    at org.apache.hadoop.ipc.Client.call(Client.java:1408)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
    at com.sun.proxy.$Proxy10.getAclStatus(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAclStatus(ClientNamenodeProtocolTranslatorPB.java:1327)
    at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy11.getAclStatus(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3342)
    ... 12 more
I0123 01:10:20.195082 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_44
I0123 01:10:20.195291 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_40
I0123 01:10:20.195649 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_39
I0123 01:10:20.195747 12623 HdfsTable.java:441] Loading disk ids for: history.itfm_ui. nodes: 14. filesystem: hdfs://ph-hdp-prd-nn01:8020
I0123 01:10:20.195901 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_40
I0123 01:10:20.196444 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_40
I0123 01:10:20.196620 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_45
I0123 01:10:20.196997 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_41
I0123 01:10:20.197204 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_4
I0123 01:10:20.197505 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_41
I0123 01:10:20.197715 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_41
I0123 01:10:20.198137 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_46
I0123 01:10:20.199527 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_42
I0123 01:10:20.199635 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_40
I0123 01:10:20.199748 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_42
I0123 01:10:20.200213 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_42
I0123 01:10:20.200412 8549 HdfsTable.java:348] load block md for
pa__streaming_batch file part-00000_copy_47 I0123 01:10:20.201068 18634 catalog-server.cc:83] ResetMetadata(): response=TResetMetadataResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 43400, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 04: updated_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 43400, 05: table (struct) = TTable { 01: db_name (string) = "history", 02: tbl_name (string) = "airwatch_console", 04: id (i32) = 4269, 05: access_level (i32) = 1, 06: columns (list) = list[46] { [0] = TColumn { 01: columnName (string) = "id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "ID", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 36, 02: max_size (i64) = 36, 03: num_distinct_values (i64) = 4594, 04: num_nulls (i64) = -1, }, 05: position (i32) = 3, }, [1] = TColumn { 01: columnName (string) = "_idts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "User\\'s first visit time (epoch time, seconds)", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 10, 02: max_size (i64) = 10, 03: num_distinct_values (i64) = 233, 04: num_nulls (i64) = -1, }, 05: position (i32) = 4, }, [2] = TColumn { 01: columnName (string) = "received_time", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Time recorded when the request is received by the 
web server", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 0, 02: max_size (i64) = 0, 03: num_distinct_values (i64) = 0, 04: num_nulls (i64) = -1, }, 05: position (i32) = 5, }, [3] = TColumn { 01: columnName (string) = "ua_os", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "OS used (from the user agent string)", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 6.4365253448486328, 02: max_size (i64) = 23, 03: num_distinct_values (i64) = 8, 04: num_nulls (i64) = -1, }, 05: position (i32) = 6, }, [4] = TColumn { 01: columnName (string) = "pa__is_external", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 03: comment (string) = "Whether the related bundle is external, i.e. came from outside VMWare network.\nFor snapshot, this is the is external value of the last bundle that contributed to that record.\nComments:\nFor on-prem products, pa__is_external is a good estimate of whether ...", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 7, }, [5] = TColumn { 01: columnName (string) = "ua_browser_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Version of browser used (from the user agent string)", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 10.360701560974121, 02: max_size (i64) = 13, 03: num_distinct_values (i64) = 33, 04: num_nulls (i64) = -1, }, 05: position (i32) = 8, }, [6] = TColumn { 01: columnName (string) = "s", 02: columnType (struct) = 
TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Second of event", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1.8310225009918213, 02: max_size (i64) = 2, 03: num_distinct_values (i64) = 58, 04: num_nulls (i64) = -1, }, 05: position (i32) = 9, }, [7] = TColumn { 01: columnName (string) = "_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Client ID - Identifies physical client regardless of user logged in. When used in conjunction with _idvc, together they represent a session ID", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = 233, 04: num_nulls (i64) = -1, }, 05: position (i32) = 10, }, [8] = TColumn { 01: columnName (string) = "link", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = ">>Enter description here<<.Column was created by collectorId airwatch-admin-ui.1_0. 
If you need more information find the owner of airwatch-admin-ui.1_0 here: https://wiki.eng.vmware.com/PhoneHome/Platform/10/AdoptionStatus.", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 0, 02: max_size (i64) = 0, 03: num_distinct_values (i64) = 0, 04: num_nulls (i64) = -1, }, 05: position (i32) = 11, }, [9] = TColumn { 01: columnName (string) = "qt", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Indicates if the user has QuickTime enabled (0:No 1:Yes)", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 12, }, [10] = TColumn { 01: columnName (string) = "_idn", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Flag indicating new user", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 13, }, [11] = TColumn { 01: columnName (string) = "gears", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Indicates if the user has Google Gears enabled (0:No 1:Yes)", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 14, }, [12] = TColumn { 01: columnName (string) = "res", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type 
(i32) = 12, }, }, }, }, 03: comment (string) = "Resolution of screen", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 12.149532318115234, 02: max_size (i64) = 37, 03: num_distinct_values (i64) = 32, 04: num_nulls (i64) = -1, }, 05: position (i32) = 15, }, [13] = TColumn { 01: columnName (string) = "country_from_ip", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Country from which the user is using the UI (derived)", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 0.074090123176574707, 02: max_size (i64) = 2, 03: num_distinct_values (i64) = 4, 04: num_nulls (i64) = -1, }, 05: position (i32) = 16, }, [14] = TColumn { 01: columnName (string) = "r", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Random number (for caching reasons)", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 6, 02: max_size (i64) = 6, 03: num_distinct_values (i64) = 4623, 04: num_nulls (i64) = -1, }, 05: position (i32) = 17, }, [15] = TColumn { 01: columnName (string) = "action_name", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Typically title of the page", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 0, 02: max_size (i64) = 0, 03: num_distinct_values (i64) = 0, 04: num_nulls (i64) = -1, }, 05: position (i32) = 18, }, [16] = TColumn { 01: columnName (string) = "dir", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment 
(string) = "Indicates if the user has X Director enabled (0:No 1:Yes)", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 19, }, [17] = TColumn { 01: columnName (string) = "ag", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Indicates if the user has SiverLight enabled (0:No 1:Yes)", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 20, }, [18] = TColumn { 01: columnName (string) = "e_c", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Event category", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 0, 02: max_size (i64) = 0, 03: num_distinct_values (i64) = 0, 04: num_nulls (i64) = -1, }, 05: position (i32) = 21, }, [19] = TColumn { 01: columnName (string) = "customer_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Obfuscated (one-way hashed) values of customer ID. There will be no way to use these values to identify the actual customer. This implies that no other system at VMware (e.g. 
GEM) will contain these IDs in the clear or with the same hash algorithm (so t...", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 64, 02: max_size (i64) = 64, 03: num_distinct_values (i64) = 147, 04: num_nulls (i64) = -1, }, 05: position (i32) = 22, }, [20] = TColumn { 01: columnName (string) = "m", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Minute of event", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1.8290728330612183, 02: max_size (i64) = 2, 03: num_distinct_values (i64) = 58, 04: num_nulls (i64) = -1, }, 05: position (i32) = 23, }, [21] = TColumn { 01: columnName (string) = "_idvc", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "User visit count - This with _id provides a unique user session", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1.6159012317657471, 02: max_size (i64) = 3, 03: num_distinct_values (i64) = 82, 04: num_nulls (i64) = -1, }, 05: position (i32) = 24, }, [22] = TColumn { 01: columnName (string) = "e_n", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Event name", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 0, 02: max_size (i64) = 0, 03: num_distinct_values (i64) = 0, 04: num_nulls (i64) = -1, }, 05: position (i32) = 25, }, [23] = TColumn { 01: columnName (string) = "download", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = ">>Enter description 
here<<.Column was created by collectorId airwatch-admin-ui.1_0. If you need more information find the owner of airwatch-admin-ui.1_0 here: https://wiki.eng.vmware.com/PhoneHome/Platform/10/AdoptionStatus.", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 0, 02: max_size (i64) = 0, 03: num_distinct_values (i64) = 0, 04: num_nulls (i64) = -1, }, 05: position (i32) = 26, }, [24] = TColumn { 01: columnName (string) = "pa__collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "The collector instance ID of the related bundle. Collector instance is a concrete deployment of a collector and its ID is specified in the client collector code before sending data to VMWare.\nFor snapshot, this is the ID of the last collector instance t...", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 64, 02: max_size (i64) = 64, 03: num_distinct_values (i64) = 16, 04: num_nulls (i64) = -1, }, 05: position (i32) = 27, }, [25] = TColumn { 01: columnName (string) = "pa__arrival_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 03: comment (string) = "Represents the date and time when the record has been received by the Collection Platform.\nFor snapshot, this is the received time of the last payload that contributed to the record.\nComments:\nThis is the primary date/time value recommended to be used f...", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = 4673, 04: num_nulls (i64) = -1, }, 05: position (i32) = 28, }, [26] = TColumn { 01: columnName (string) = "pa__bundle__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 
02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Foreign key pointing to BUNDLE.ID. Links the record to metadata about the payload that contributed the record and allows grouping of data that came in the same payload for products in case this grouping is meaningful.\nFor snapshot, this is the ID of the...", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 36, 02: max_size (i64) = 36, 03: num_distinct_values (i64) = 4786, 04: num_nulls (i64) = -1, }, 05: position (i32) = 29, }, [27] = TColumn { 01: columnName (string) = "ua_browser", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Browser used (from the user agent string)", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 6.1923742294311523, 02: max_size (i64) = 13, 03: num_distinct_values (i64) = 6, 04: num_nulls (i64) = -1, }, 05: position (i32) = 30, }, [28] = TColumn { 01: columnName (string) = "_v", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Client version to be used for continual deployments like Flings or SaaS applications. 
", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 7, 02: max_size (i64) = 7, 03: num_distinct_values (i64) = 1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 31, }, [29] = TColumn { 01: columnName (string) = "ua_os_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Version of OS used (from the user agent string)", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 4.6553292274475098, 02: max_size (i64) = 7, 03: num_distinct_values (i64) = 21, 04: num_nulls (i64) = -1, }, 05: position (i32) = 32, }, [30] = TColumn { 01: columnName (string) = "h", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Hour of event (local time, on a 24 hour clock)", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1.942807674407959, 02: max_size (i64) = 2, 03: num_distinct_values (i64) = 19, 04: num_nulls (i64) = -1, }, 05: position (i32) = 33, }, [31] = TColumn { 01: columnName (string) = "cookie", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Indicates if the user accepts cookies (0:No 1:Yes)", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 34, }, [32] = TColumn { 01: columnName (string) = "uid", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "User ID - Identifies user currently logged in. 
Represented as SHA256 hash of provided ID.", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 64, 02: max_size (i64) = 64, 03: num_distinct_values (i64) = 207, 04: num_nulls (i64) = -1, }, 05: position (i32) = 35, }, [33] = TColumn { 01: columnName (string) = "url", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "URL of the current page (IP address, host names and other sensitive or human entered data is removed)", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 48.519279479980469, 02: max_size (i64) = 201, 03: num_distinct_values (i64) = 478, 04: num_nulls (i64) = -1, }, 05: position (i32) = 36, }, [34] = TColumn { 01: columnName (string) = "_refts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Timestamp of referral", 04: col_stat
I0123 01:10:20.201771 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_43
I0123 01:10:20.202033 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_43
I0123 01:10:20.202306 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_41
I0123 01:10:20.202714 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_48
I0123 01:10:20.202888 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_43
I0123 01:10:20.202920 18634 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ResetMetadata from 10.153.201.26:43550 took 18s369ms
I0123 01:10:20.204044 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_44
I0123 01:10:20.204216 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_42
I0123 01:10:20.204329 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_49
I0123 01:10:20.204499 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_44
I0123 01:10:20.204752 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_44
I0123 01:10:20.205080 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_45
I0123 01:10:20.205835 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_43
I0123 01:10:20.206039 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_5
I0123 01:10:20.206565 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_45
I0123 01:10:20.206665 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_45
I0123 01:10:20.206792 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_46
I0123 01:10:20.206910 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_44
I0123 01:10:20.207063 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_50
I0123 01:10:20.207695 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_46
I0123 01:10:20.207811 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_46
I0123 01:10:20.208223 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_5
I0123 01:10:20.208346 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_45
I0123 01:10:20.208789 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_51
I0123 01:10:20.209190 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_47
I0123 01:10:20.209394 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_47
I0123 01:10:20.211762 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_6
I0123 01:10:20.215134 6037 catalog-server.cc:83] ResetMetadata():
response=TResetMetadataResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 43406, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 04: updated_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 43406, 05: table (struct) = TTable { 01: db_name (string) = "history_staging", 02: tbl_name (string) = "esxi_hostinfo", 04: id (i32) = 4379, 05: access_level (i32) = 1, 06: columns (list) = list[19] { [0] = TColumn { 01: columnName (string) = "id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 3, }, [1] = TColumn { 01: columnName (string) = "locale", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 4, }, [2] = TColumn { 01: columnName (string) = "useragent", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 5, }, [3] = TColumn { 01: columnName (string) = "license", 02: columnType (struct) = TColumnType { 01: types 
(list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 6, }, [4] = TColumn { 01: columnName (string) = "hostclientversion", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 7, }, [5] = TColumn { 01: columnName (string) = "esxversion", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 8, }, [6] = TColumn { 01: columnName (string) = "os", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 9, }, [7] = TColumn { 01: columnName (string) = "ismanagedbyvc", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 
10, }, [8] = TColumn { 01: columnName (string) = "timestamp", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 11, }, [9] = TColumn { 01: columnName (string) = "browser", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 12, }, [10] = TColumn { 01: columnName (string) = "pa__collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 13, }, [11] = TColumn { 01: columnName (string) = "pa__is_external", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 14, }, [12] = TColumn { 01: columnName (string) = "received_time", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: 
avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 15, }, [13] = TColumn { 01: columnName (string) = "state_from_ip", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 16, }, [14] = TColumn { 01: columnName (string) = "country_from_ip", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 17, }, [15] = TColumn { 01: columnName (string) = "pa__bundle__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 18, }, [16] = TColumn { 01: columnName (string) = "pa__arrival_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 19, }, [17] = TColumn { 01: columnName (string) = "pa__processed_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = 
TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 20, }, [18] = TColumn { 01: columnName (string) = "_v", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 21, }, }, 07: clustering_columns (list) = list[3] { [0] = TColumn { 01: columnName (string) = "pa__arrival_day", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 40, 04: num_nulls (i64) = 0, }, 05: position (i32) = 0, }, [1] = TColumn { 01: columnName (string) = "pa__collector_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = 0, }, 05: position (i32) = 1, }, [2] = TColumn { 01: columnName (string) = "pa__schema_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 1, 04: num_nulls (i64) = 0, }, 
05: position (i32) = 2, }, }, 09: table_type (i32) = 0, 10: hdfs_table (struct) = THdfsTable { 01: hdfsBaseDir (string) = "hdfs://ph-hdp-prd-nn01:8020/user/hive/warehouse/history_staging.db/esxi_hostinfo", 02: colNames (list) = list[22] { [0] = "pa__arrival_day", [1] = "pa__collector_id", [2] = "pa__schema_version", [3] = "id", [4] = "locale", [5] = "useragent", [6] = "license", [7] = "hostclientversion", [8] = "esxversion", [9] = "os", [10] = "ismanagedbyvc", [11] = "timestamp", [12] = "browser", [13] = "pa__collector_instance_id", [14] = "pa__is_external", [15] = "received_time", [16] = "state_from_ip", [17] = "country_from_ip", [18] = "pa__bundle__fk", [19] = "pa__arrival_ts", [20] = "pa__processed_ts", [21] = "_v", }, 03: nullPartitionKeyValue (string) = "__HIVE_DEFAULT_PARTITION__", 04: partitions (map) = map[79] { -1 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[0] { }, 08: blockSize (i32) = 0, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = -1, 15: hms_parameters (map) = map[0] { }, }, 41745 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1481760000, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 
02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "hostclient.1_0", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "6444458717c1a71f-d293484fca4e299_25338536_data.0.parq", 02: length (i64) = 4160, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1481891768128, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 4160, 03: replica_host_idxs (list) = list[3] { [0] = 0, [1] = 1, [2] = 2, }, 04: disk_ids (list) = list[3] { [0] = 0, [1] = 1, [2] = 1, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=1481760000/pa__collector_id=hostclient.1_0/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 41745, 15: hms_parameters (map) = map[6] { "COLUMN_STATS_ACCURATE" -> "false", "numFiles" -> "1", "numRows" -> "-1", "rawDataSize" -> "-1", "totalSize" -> "4160", "transient_lastDdlTime" -> "1481891768", }, }, 41746 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat 
(i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1481760000, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "prototyping-only.v0", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "974f44fff3d52a28-fa442ebbe699a790_1937429486_data.0.parq", 02: length (i64) = 3647, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1481891343051, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 3647, 03: replica_host_idxs (list) = list[3] { [0] = 3, [1] = 4, [2] = 5, }, 04: disk_ids (list) = list[3] { [0] = 1, [1] = 1, [2] = 1, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=1481760000/pa__collector_id=prototyping-only.v0/pa__schema_version=1", }, 11: access_level (i32) = 1, 
12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 41746, 15: hms_parameters (map) = map[6] { "COLUMN_STATS_ACCURATE" -> "false", "numFiles" -> "1", "numRows" -> "-1", "rawDataSize" -> "-1", "totalSize" -> "3647", "transient_lastDdlTime" -> "1481891343", }, }, 41747 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1481846400, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] {
I0123 01:10:20.217975 6037 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ResetMetadata from 10.153.201.21:37512 took 17s416ms
I0123 01:10:20.220878 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_46
I0123 01:10:20.221592 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_7
I0123 01:10:20.221786 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_48
I0123 01:10:20.222301 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_48
I0123 01:10:20.222439 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_52
I0123 01:10:20.223201 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_8
I0123 01:10:20.223784 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_49
I0123 01:10:20.224153 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_49
I0123 01:10:20.224238 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_53
I0123 01:10:20.224406 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_47
I0123 01:10:20.224828 4162 HdfsTable.java:348] load block md for astro_ui file part-00000_copy_9
I0123 01:10:20.225661 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_5
I0123 01:10:20.226068 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_5
I0123 01:10:20.226163 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_54
I0123 01:10:20.226297 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_48
I0123 01:10:20.226893 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_50
I0123 01:10:20.227412 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_50
I0123 01:10:20.227633 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_49
I0123 01:10:20.228196 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_6
I0123 01:10:20.228492 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_51
I0123 01:10:20.228940 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_51
I0123 01:10:20.229383 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_5
I0123 01:10:20.230334 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_7
I0123 01:10:20.230485 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_52
I0123 01:10:20.230829 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_52
I0123 01:10:20.231604 4162 FsPermissionChecker.java:290] No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs) Java exception follows: org.apache.hadoop.hdfs.protocol.AclException: The ACL operation has been rejected.
Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false. at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080) at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3344) at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1983) at 
org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1980) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.getAclStatus(DistributedFileSystem.java:1980) at com.cloudera.impala.util.FsPermissionChecker.getPermissions(FsPermissionChecker.java:288) at com.cloudera.impala.catalog.HdfsTable.getAvailableAccessLevel(HdfsTable.java:760) at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:844) at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:792) at com.cloudera.impala.catalog.HdfsTable.reloadPartition(HdfsTable.java:1951) at com.cloudera.impala.catalog.CatalogServiceCatalog.reloadPartition(CatalogServiceCatalog.java:1210) at com.cloudera.impala.service.CatalogOpExecutor.execResetMetadata(CatalogOpExecutor.java:2707) at com.cloudera.impala.service.JniCatalog.resetMetadata(JniCatalog.java:151) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AclException): The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false. 
at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080) at org.apache.hadoop.ipc.Client.call(Client.java:1471) at org.apache.hadoop.ipc.Client.call(Client.java:1408) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230) at com.sun.proxy.$Proxy10.getAclStatus(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAclStatus(ClientNamenodeProtocolTranslatorPB.java:1327) at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256) at 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy11.getAclStatus(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3342)
    ... 12 more
I0123 01:10:20.233175 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_53
I0123 01:10:20.233336 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_8
I0123 01:10:20.233481 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_50
I0123 01:10:20.233633 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_53
I0123 01:10:20.234155 4162 HdfsTable.java:441] Loading disk ids for: history_staging.astro_ui. nodes: 14. filesystem: hdfs://ph-hdp-prd-nn01:8020
I0123 01:10:20.234386 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_6
I0123 01:10:20.234813 8549 HdfsTable.java:348] load block md for pa__streaming_batch file part-00000_copy_9
I0123 01:10:20.235056 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_54
I0123 01:10:20.235277 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_51
I0123 01:10:20.235765 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_7
I0123 01:10:20.236315 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_6
I0123 01:10:20.236624 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_52
I0123 01:10:20.237018 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_8
I0123 01:10:20.237592 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_7
I0123 01:10:20.237840 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_53
I0123 01:10:20.238183 12831 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_9
I0123 01:10:20.238778 10057 HdfsTable.java:348] load
block md for airwatch_console file part-00001_copy_8
I0123 01:10:20.239480 8549 FsPermissionChecker.java:290] No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs) Java exception follows: org.apache.hadoop.hdfs.protocol.AclException: The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false.
I0123 01:10:20.239778 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_54
I0123 01:10:20.239913 10057 HdfsTable.java:348] load block md for airwatch_console file part-00001_copy_9
I0123 01:10:20.240993 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_6
I0123 01:10:20.241221 8549 HdfsTable.java:441] Loading disk ids for: history.pa__streaming_batch. nodes: 14. filesystem: hdfs://ph-hdp-prd-nn01:8020
I0123 01:10:20.242339 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_7
I0123 01:10:20.243439 12831 FsPermissionChecker.java:290] No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs) Java exception follows: org.apache.hadoop.hdfs.protocol.AclException: The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false.
I0123 01:10:20.243703 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_8
I0123 01:10:20.244555 10057 FsPermissionChecker.java:290] No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs) Java exception follows: org.apache.hadoop.hdfs.protocol.AclException: The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false.
I0123 01:10:20.247498 11602 HdfsTable.java:348] load block md for esxi_hostinfo file part-00000_copy_9
I0123 01:10:20.247648 12831 HdfsTable.java:441] Loading disk ids for: history.h5_ui_errors. nodes: 14. filesystem: hdfs://ph-hdp-prd-nn01:8020
I0123 01:10:20.248381 10057 HdfsTable.java:441] Loading disk ids for: history_staging.airwatch_console. nodes: 14. filesystem: hdfs://ph-hdp-prd-nn01:8020
I0123 01:10:20.254714 11602 FsPermissionChecker.java:290] No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs) Java exception follows: org.apache.hadoop.hdfs.protocol.AclException: The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false.
I0123 01:10:20.257536 11602 HdfsTable.java:441] Loading disk ids for: history.esxi_hostinfo. nodes: 14. filesystem: hdfs://ph-hdp-prd-nn01:8020
I0123 01:10:20.317821 4162 catalog-server.cc:83] ResetMetadata(): response=TResetMetadataResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 43403, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 04: updated_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 43403, 05: table (struct) = TTable { 01: db_name (string) = "history_staging", 02: tbl_name (string) = "astro_ui", 04: id (i32) = 4150, 05: access_level (i32) = 1, 06: columns (list) = list[46] {
[0] = TColumn { 01: columnName (string) = "id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 3, },
[1] = TColumn { 01: columnName (string) = "url", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 4,
},
[2] = TColumn { 01: columnName (string) = "_idvc", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 5, },
[3] = TColumn { 01: columnName (string) = "ua_os", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 6, },
[4] = TColumn { 01: columnName (string) = "pa__collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 7, },
[5] = TColumn { 01: columnName (string) = "r", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 8, },
[6] = TColumn { 01: columnName (string) = "e_v", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 9, },
[7] = TColumn { 01: columnName (string) = "s", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 10, },
[8] = TColumn { 01: columnName (string) = "_refts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 11, },
[9] = TColumn { 01: columnName (string) = "e_a", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 12, },
[10] = TColumn { 01: columnName (string) = "e_n", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 13, },
[11] = TColumn { 01: columnName (string) = "m", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 14, },
[12] = TColumn { 01: columnName (string) = "e_c", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 15, },
[13] = TColumn { 01: columnName (string) = "_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 16, },
[14] = TColumn { 01: columnName (string) = "h", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 17, },
[15] = TColumn { 01: columnName (string) = "pa__is_external", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 18, },
[16] = TColumn { 01: columnName (string) = "action_name", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 19, },
[17] = TColumn { 01: columnName (string) = "gt_ms", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 20, },
[18] = TColumn { 01: columnName (string) = "_idts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 21, },
[19] = TColumn { 01: columnName (string) = "received_time", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 22, },
[20] = TColumn { 01: columnName (string) = "_idn", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 23, },
[21] = TColumn { 01: columnName (string) = "ua_browser_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 24, },
[22] = TColumn { 01: columnName (string) = "ua_os_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 25, },
[23] = TColumn { 01: columnName (string) = "_viewts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 26, },
[24] = TColumn { 01: columnName (string) = "ua_browser", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 27, },
[25] = TColumn { 01: columnName (string) = "wma", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size
(i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 28, }, [26] = TColumn { 01: columnName (string) = "java", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 29, }, [27] = TColumn { 01: columnName (string) = "ag", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 30, }, [28] = TColumn { 01: columnName (string) = "res", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 31, }, [29] = TColumn { 01: columnName (string) = "fla", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 32, }, [30] = TColumn { 01: columnName (string) = "realp", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, 
}, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 33, }, [31] = TColumn { 01: columnName (string) = "qt", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 34, }, [32] = TColumn { 01: columnName (string) = "urlref", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 35, }, [33] = TColumn { 01: columnName (string) = "pdf", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 36, }, [34] = TColumn { 01: columnName (string) = "gears", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 37, }, [35] = TColumn { 01: columnName (string) = "dir", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode 
{ 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 38, }, [36] = TColumn { 01: columnName (string) = "cookie", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 39, }, [37] = TColumn { 01: columnName (string) = "country_from_ip", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 40, }, [38] = TColumn { 01: columnName (string) = "uid", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 41, }, [39] = TColumn { 01: columnName (string) = "state_from_ip", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 42, }, [40] = TColumn { 01: 
columnName (string) = "link", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 43, }, [41] = TColumn { 01: columnName (string) = "pa__bundl
I0123 01:10:20.323057  4162 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ResetMetadata from 10.153.201.22:58692 took 18s340ms
I0123 01:10:20.371693 10057 catalog-server.cc:83] ResetMetadata(): response=TResetMetadataResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 43407, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 04: updated_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 43407, 05: table (struct) = TTable { 01: db_name (string) = "history_staging", 02: tbl_name (string) = "airwatch_console", 04: id (i32) = 4152, 05: access_level (i32) = 1, 06: columns (list) = list[46] { [0] = TColumn { 01: columnName (string) = "id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 3, }, [1] = TColumn { 01: columnName (string) = "ua_browser", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: 
avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 4, }, [2] = TColumn { 01: columnName (string) = "r", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 5, }, [3] = TColumn { 01: columnName (string) = "url", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 6, }, [4] = TColumn { 01: columnName (string) = "dir", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 7, }, [5] = TColumn { 01: columnName (string) = "_idvc", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 8, }, [6] = TColumn { 01: columnName (string) = "cookie", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType 
{ 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 9, }, [7] = TColumn { 01: columnName (string) = "_v", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 10, }, [8] = TColumn { 01: columnName (string) = "pa__bundle__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 11, }, [9] = TColumn { 01: columnName (string) = "pa__processed_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 12, }, [10] = TColumn { 01: columnName (string) = "pdf", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 13, }, [11] = TColumn { 01: columnName (string) = "_idts", 02: columnType (struct) = TColumnType { 
01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 14, }, [12] = TColumn { 01: columnName (string) = "_idn", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 15, }, [13] = TColumn { 01: columnName (string) = "s", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 16, }, [14] = TColumn { 01: columnName (string) = "qt", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 17, }, [15] = TColumn { 01: columnName (string) = "ua_browser_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) 
= 18, }, [16] = TColumn { 01: columnName (string) = "ua_os", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 19, }, [17] = TColumn { 01: columnName (string) = "_refts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 20, }, [18] = TColumn { 01: columnName (string) = "gt_ms", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 21, }, [19] = TColumn { 01: columnName (string) = "m", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 22, }, [20] = TColumn { 01: columnName (string) = "action_name", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: 
max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 23, }, [21] = TColumn { 01: columnName (string) = "pa__collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 24, }, [22] = TColumn { 01: columnName (string) = "received_time", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 25, }, [23] = TColumn { 01: columnName (string) = "h", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 26, }, [24] = TColumn { 01: columnName (string) = "gears", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 27, }, [25] = TColumn { 01: columnName (string) = "uid", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = 
TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 28, }, [26] = TColumn { 01: columnName (string) = "pa__arrival_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 29, }, [27] = TColumn { 01: columnName (string) = "fla", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 30, }, [28] = TColumn { 01: columnName (string) = "wma", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 31, }, [29] = TColumn { 01: columnName (string) = "_viewts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 32, }, [30] = TColumn { 01: columnName (string) = "realp", 02: columnType (struct) = 
TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 33, }, [31] = TColumn { 01: columnName (string) = "urlref", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 34, }, [32] = TColumn { 01: columnName (string) = "_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 35, }, [33] = TColumn { 01: columnName (string) = "ag", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 36, }, [34] = TColumn { 01: columnName (string) = "java", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position 
(i32) = 37, }, [35] = TColumn { 01: columnName (string) = "res", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 38, }, [36] = TColumn { 01: columnName (string) = "ua_os_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 39, }, [37] = TColumn { 01: columnName (string) = "pa__is_external", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 40, }, [38] = TColumn { 01: columnName (string) = "country_from_ip", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 41, }, [39] = TColumn { 01: columnName (string) = "state_from_ip", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: 
avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 42, }, [40] = TColumn { 01: columnName (string) = "e_n", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 43, }, [41] = TColumn {
I0123 01:10:20.376477 10057 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ResetMetadata from 10.153.201.23:57413 took 17s025ms
I0123 01:10:20.410879 14317 rpc-trace.cc:184] RPC call: CatalogService.PrioritizeLoad(from 10.153.201.19:35818)
I0123 01:10:20.412108 14317 catalog-server.cc:127] PrioritizeLoad(): request=TPrioritizeLoadRequest { 01: protocol_version (i32) = 0, 02: header (struct) = TCatalogServiceRequestHeader { }, 03: object_descs (list) = list[1] { [0] = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 0, 05: table (struct) = TTable { 01: db_name (string) = "staging", 02: tbl_name (string) = "ph_dow_20170123_010826_bundle", }, }, }, }
I0123 01:10:20.413311 11206 TableLoadingMgr.java:281] Loading next table. Remaining items in queue: 0
I0123 01:10:20.413753 14317 catalog-server.cc:133] PrioritizeLoad(): response=TPrioritizeLoadResponse { 01: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, }
I0123 01:10:20.413884 14317 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.PrioritizeLoad from 10.153.201.19:35818 took 3.000ms
I0123 01:10:20.413993 11691 TableLoader.java:59] Loading metadata for: staging.ph_dow_20170123_010826_bundle
I0123 01:10:20.435794 11691 Table.java:161] Loading column stats for table: ph_dow_20170123_010826_bundle
I0123 01:10:20.460156 12623 catalog-server.cc:83] ResetMetadata(): response=TResetMetadataResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 43402, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 04: updated_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 43402, 05: table (struct) = TTable { 01: db_name (string) = "history", 02: tbl_name (string) = "itfm_ui", 04: id (i32) = 4149, 05: access_level (i32) = 1, 06: columns (list) = list[39] { [0] = TColumn { 01: columnName (string) = "id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 36, 02: max_size (i64) = 36, 03: num_distinct_values (i64) = 227104, 04: num_nulls (i64) = -1, }, 05: position (i32) = 3, }, [1] = TColumn { 01: columnName (string) = "_refts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 
2.9923999309539795, 02: max_size (i64) = 10, 03: num_distinct_values (i64) = 7769, 04: num_nulls (i64) = -1, }, 05: position (i32) = 4, }, [2] = TColumn { 01: columnName (string) = "action_name", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 32.992713928222656, 02: max_size (i64) = 72, 03: num_distinct_values (i64) = 63, 04: num_nulls (i64) = -1, }, 05: position (i32) = 5, }, [3] = TColumn { 01: columnName (string) = "java", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 6, }, [4] = TColumn { 01: columnName (string) = "dir", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 7, }, [5] = TColumn { 01: columnName (string) = "fla", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 8, }, [6] = TColumn { 01: columnName (string) = "pdf", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = 
TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 9, }, [7] = TColumn { 01: columnName (string) = "urlref", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 39.253799438476562, 02: max_size (i64) = 5143, 03: num_distinct_values (i64) = 1402, 04: num_nulls (i64) = -1, }, 05: position (i32) = 10, }, [8] = TColumn { 01: columnName (string) = "_viewts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 10, 02: max_size (i64) = 10, 03: num_distinct_values (i64) = 31034, 04: num_nulls (i64) = -1, }, 05: position (i32) = 11, }, [9] = TColumn { 01: columnName (string) = "r", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 6, 02: max_size (i64) = 6, 03: num_distinct_values (i64) = 216459, 04: num_nulls (i64) = -1, }, 05: position (i32) = 12, }, [10] = TColumn { 01: columnName (string) = "realp", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 13, }, [11] = TColumn { 01: columnName (string) = "s", 02: columnType (struct) = 
TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1.8332446813583374, 02: max_size (i64) = 2, 03: num_distinct_values (i64) = 58, 04: num_nulls (i64) = -1, }, 05: position (i32) = 14, }, [12] = TColumn { 01: columnName (string) = "qt", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 15, }, [13] = TColumn { 01: columnName (string) = "m", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1.8328707218170166, 02: max_size (i64) = 2, 03: num_distinct_values (i64) = 58, 04: num_nulls (i64) = -1, }, 05: position (i32) = 16, }, [14] = TColumn { 01: columnName (string) = "cookie", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 17, }, [15] = TColumn { 01: columnName (string) = "_idn", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, 
}, 05: position (i32) = 18, }, [16] = TColumn { 01: columnName (string) = "res", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 10.51788330078125, 02: max_size (i64) = 37, 03: num_distinct_values (i64) = 992, 04: num_nulls (i64) = -1, }, 05: position (i32) = 19, }, [17] = TColumn { 01: columnName (string) = "_idvc", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1.2742840051651001, 02: max_size (i64) = 3, 03: num_distinct_values (i64) = 150, 04: num_nulls (i64) = -1, }, 05: position (i32) = 20, }, [18] = TColumn { 01: columnName (string) = "gt_ms", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1.8606743812561035, 02: max_size (i64) = 6, 03: num_distinct_values (i64) = 2263, 04: num_nulls (i64) = -1, }, 05: position (i32) = 21, }, [19] = TColumn { 01: columnName (string) = "_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = 9294, 04: num_nulls (i64) = -1, }, 05: position (i32) = 22, }, [20] = TColumn { 01: columnName (string) = "gears", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: 
col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 23, }, [21] = TColumn { 01: columnName (string) = "url", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 50.097366333007812, 02: max_size (i64) = 5143, 03: num_distinct_values (i64) = 1175, 04: num_nulls (i64) = -1, }, 05: position (i32) = 24, }, [22] = TColumn { 01: columnName (string) = "_idts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 10, 02: max_size (i64) = 10, 03: num_distinct_values (i64) = 9518, 04: num_nulls (i64) = -1, }, 05: position (i32) = 25, }, [23] = TColumn { 01: columnName (string) = "h", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1.8520394563674927, 02: max_size (i64) = 2, 03: num_distinct_values (i64) = 24, 04: num_nulls (i64) = -1, }, 05: position (i32) = 26, }, [24] = TColumn { 01: columnName (string) = "ag", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 27, }, [25] = TColumn { 01: columnName (string) = "wma", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { 
[0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 28, }, [26] = TColumn { 01: columnName (string) = "pa__is_external", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 29, }, [27] = TColumn { 01: columnName (string) = "pa__collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 37.609352111816406, 02: max_size (i64) = 40, 03: num_distinct_values (i64) = 1805, 04: num_nulls (i64) = -1, }, 05: position (i32) = 30, }, [28] = TColumn { 01: columnName (string) = "ua_browser", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 5.930443286895752, 02: max_size (i64) = 14, 03: num_distinct_values (i64) = 12, 04: num_nulls (i64) = -1, }, 05: position (i32) = 31, }, [29] = TColumn { 01: columnName (string) = "pa__bundle__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 37.235195159912109, 02: max_size (i64) = 39, 03: num_distinct_values (i64) = 
240271, 04: num_nulls (i64) = -1, }, 05: position (i32) = 32, }, [30] = TColumn { 01: columnName (string) = "pa__arrival_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = 233770, 04: num_nulls (i64) = -1, }, 05: position (i32) = 33, }, [31] = TColumn { 01: columnName (string) = "pa__processed_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = 232171, 04: num_nulls (i64) = -1, }, 05: position (i32) = 34, }, [32] = TColumn { 01: columnName (string) = "ua_os", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8.61578369140625, 02: max_size (i64) = 19, 03: num_distinct_values (i64) = 12, 04: num_nulls (i64) = -1, }, 05: position (i32) = 35, }, [33] = TColumn { 01: columnName (string) = "ua_os_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 3.4931962490081787, 02: max_size (i64) = 7, 03: num_distinct_values (i64) = 32, 04: num_nulls (i64) = -1, }, 05: position (i32) = 36, }, [34] = TColumn { 01: columnName (string) = "ua_browser_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type 
(struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 9.1478557586669922, 02: max_size (i64) = 13, 03: num_distinct_values (i64) = 132, 04: num_nulls (i64) = -1, }, 05: position (i32) = 37, }, [35] = TColumn { 01: columnName (string) = "received_time", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 13, 02: max_size (i64) = 13, 03: num_distinct_values (i64) = 49036, 04: num_nulls (i64) = -1, }, 05: position (i32) = 38, }, [36] = TColumn { 01: columnName (string) = "uid", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "user ID of the logged in user (hashed)", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 15.979450225830078, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = 634, 04: num_nulls (i64) = -1, }, 05: position (i32) = 39, }, [37] = TColumn { 01: columnName (string) = "state_from_ip", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "State from which the user is using the UI (derived)", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 0.46828320622444153, 02: max_size (i64) = 2, 03: num_distinct_values (i64) = 38, 04: num_nulls (i64) = -1, }, 05: position (i32) = 40, }, [38] = TColumn { 01: columnName (string) = "country_from_ip", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Country from which 
the user is using the UI (derived)", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1.5372523069381714, 02: max_size (i64) = 2, 03: num_distinct_values (i64) = 60, 04: num_nulls (i64) = -1, }, 05: position (i32) = 41, }, }, 07: clustering_columns (list) = list[3] { [0] = TColumn { 01: columnName (string) = "pa__arrival_day", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 0, }, [1] = TColumn { 01: columnName (string) = "pa__collector_id", 02: columnType (struc I0123 01:10:20.464599 11691 HdfsTable.java:1030] load table from Hive Metastore: staging.ph_dow_20170123_010826_bundle I0123 01:10:20.470338 11691 MetaStoreUtil.java:129] Fetching 0 partitions for: staging.ph_dow_20170123_010826_bundle using partition batch size: 1000 I0123 01:10:20.474567 11691 HdfsTable.java:348] load block md for ph_dow_20170123_010826_bundle file bundle-r-00000-2dfe83e6-85d1-457b-a2aa-84230d07e354.parquet I0123 01:10:20.476562 11691 HdfsTable.java:348] load block md for ph_dow_20170123_010826_bundle file bundle-r-00001-1194d7c3-99dc-4367-8033-0fd84d1b3a99.parquet I0123 01:10:20.478658 11691 HdfsTable.java:348] load block md for ph_dow_20170123_010826_bundle file bundle-r-00002-1f5d62c3-0769-4c11-8e6f-82f201e46363.parquet I0123 01:10:20.480273 11691 HdfsTable.java:348] load block md for ph_dow_20170123_010826_bundle file bundle-r-00003-32774308-7195-4b41-883d-424d321248a9.parquet I0123 01:10:20.483213 11691 HdfsTable.java:348] load block md for ph_dow_20170123_010826_bundle file bundle-r-00004-11dcc7b8-6b81-4733-8f1a-9d9cb2cc713d.parquet I0123 01:10:20.484896 11691 HdfsTable.java:348] load block md for ph_dow_20170123_010826_bundle file 
bundle-r-00005-e4939522-ad85-47ef-b45c-11222d930873.parquet I0123 01:10:20.487025 11691 HdfsTable.java:348] load block md for ph_dow_20170123_010826_bundle file bundle-r-00006-e0b77b13-716b-4522-8358-ba750c0eeb69.parquet I0123 01:10:20.489183 11691 HdfsTable.java:348] load block md for ph_dow_20170123_010826_bundle file bundle-r-00007-e04b6376-9101-4719-b9dc-29b7aa9c04da.parquet I0123 01:10:20.493441 11691 FsPermissionChecker.java:290] No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs) Java exception follows: org.apache.hadoop.hdfs.protocol.AclException: The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false. at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080) at 
sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3344) at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1983) at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1980) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.getAclStatus(DistributedFileSystem.java:1980) at com.cloudera.impala.util.FsPermissionChecker.getPermissions(FsPermissionChecker.java:288) at com.cloudera.impala.catalog.HdfsTable.getAvailableAccessLevel(HdfsTable.java:760) at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:844) at com.cloudera.impala.catalog.HdfsTable.loadAllPartitions(HdfsTable.java:716) at com.cloudera.impala.catalog.HdfsTable.load(HdfsTable.java:1035) at com.cloudera.impala.catalog.HdfsTable.load(HdfsTable.java:982) at com.cloudera.impala.catalog.TableLoader.load(TableLoader.java:81) at com.cloudera.impala.catalog.TableLoadingMgr$2.call(TableLoadingMgr.java:232) at com.cloudera.impala.catalog.TableLoadingMgr$2.call(TableLoadingMgr.java:229) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AclException): The ACL operation has been rejected. 
Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false. at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080) at org.apache.hadoop.ipc.Client.call(Client.java:1471) at org.apache.hadoop.ipc.Client.call(Client.java:1408) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230) at com.sun.proxy.$Proxy10.getAclStatus(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAclStatus(ClientNamenodeProtocolTranslatorPB.java:1327) at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104) at com.sun.proxy.$Proxy11.getAclStatus(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3342) ... 17 more I0123 01:10:20.495141 12623 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ResetMetadata from 10.153.201.22:38761 took 18s585ms I0123 01:10:20.498361 11691 FsPermissionChecker.java:290] No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs) Java exception follows: org.apache.hadoop.hdfs.protocol.AclException: The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false. at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080) at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3344) at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1983) at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1980) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.getAclStatus(DistributedFileSystem.java:1980) at com.cloudera.impala.util.FsPermissionChecker.getPermissions(FsPermissionChecker.java:288) at com.cloudera.impala.catalog.HdfsTable.getAvailableAccessLevel(HdfsTable.java:760) at com.cloudera.impala.catalog.HdfsTable.loadAllPartitions(HdfsTable.java:722) at com.cloudera.impala.catalog.HdfsTable.load(HdfsTable.java:1035) at com.cloudera.impala.catalog.HdfsTable.load(HdfsTable.java:982) at com.cloudera.impala.catalog.TableLoader.load(TableLoader.java:81) at com.cloudera.impala.catalog.TableLoadingMgr$2.call(TableLoadingMgr.java:232) at com.cloudera.impala.catalog.TableLoadingMgr$2.call(TableLoadingMgr.java:229) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AclException): The ACL operation has been rejected. 
Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false. at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080) at org.apache.hadoop.ipc.Client.call(Client.java:1471) at org.apache.hadoop.ipc.Client.call(Client.java:1408) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230) at com.sun.proxy.$Proxy10.getAclStatus(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAclStatus(ClientNamenodeProtocolTranslatorPB.java:1327) at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104) at com.sun.proxy.$Proxy11.getAclStatus(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3342) ... 16 more I0123 01:10:20.499408 11691 HdfsTable.java:441] Loading disk ids for: staging.ph_dow_20170123_010826_bundle. nodes: 11. filesystem: hdfs://ph-hdp-prd-nn01:8020 I0123 01:10:20.523607 11602 catalog-server.cc:83] ResetMetadata(): response=TResetMetadataResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 43404, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 04: updated_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 43404, 05: table (struct) = TTable { 01: db_name (string) = "history", 02: tbl_name (string) = "esxi_hostinfo", 04: id (i32) = 4120, 05: access_level (i32) = 1, 06: columns (list) = list[19] { [0] = TColumn { 01: columnName (string) = "timestamp", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 28.557889938354492, 02: max_size (i64) = 2085, 03: num_distinct_values (i64) = 1054434, 04: num_nulls (i64) = -1, }, 05: position (i32) = 3, }, [1] = TColumn { 01: columnName (string) = "browser", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16.234962463378906, 02: max_size (i64) = 2085, 03: 
num_distinct_values (i64) = 1291, 04: num_nulls (i64) = -1, }, 05: position (i32) = 4, }, [2] = TColumn { 01: columnName (string) = "os", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 35.511093139648438, 02: max_size (i64) = 2085, 03: num_distinct_values (i64) = 4069, 04: num_nulls (i64) = -1, }, 05: position (i32) = 5, }, [3] = TColumn { 01: columnName (string) = "license", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 24.660545349121094, 02: max_size (i64) = 2085, 03: num_distinct_values (i64) = 132, 04: num_nulls (i64) = -1, }, 05: position (i32) = 6, }, [4] = TColumn { 01: columnName (string) = "hostclientversion", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 7.005681037902832, 02: max_size (i64) = 2085, 03: num_distinct_values (i64) = 460, 04: num_nulls (i64) = -1, }, 05: position (i32) = 7, }, [5] = TColumn { 01: columnName (string) = "ismanagedbyvc", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 4.9148778915405273, 02: max_size (i64) = 2085, 03: num_distinct_values (i64) = 107, 04: num_nulls (i64) = -1, }, 05: position (i32) = 8, }, [6] = TColumn { 01: columnName (string) = "esxversion", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: 
type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 31.055925369262695, 02: max_size (i64) = 2085, 03: num_distinct_values (i64) = 1159, 04: num_nulls (i64) = -1, }, 05: position (i32) = 9, }, [7] = TColumn { 01: columnName (string) = "id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 36, 02: max_size (i64) = 36, 03: num_distinct_values (i64) = 1065331, 04: num_nulls (i64) = -1, }, 05: position (i32) = 10, }, [8] = TColumn { 01: columnName (string) = "locale", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 5.0062546730041504, 02: max_size (i64) = 2085, 03: num_distinct_values (i64) = 90, 04: num_nulls (i64) = -1, }, 05: position (i32) = 11, }, [9] = TColumn { 01: columnName (string) = "useragent", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 107.33876037597656, 02: max_size (i64) = 394, 03: num_distinct_values (i64) = 8828, 04: num_nulls (i64) = -1, }, 05: position (i32) = 12, }, [10] = TColumn { 01: columnName (string) = "pa__is_external", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: 
position (i32) = 13, }, [11] = TColumn { 01: columnName (string) = "pa__collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8.3359298706054688, 02: max_size (i64) = 36, 03: num_distinct_values (i64) = 63764, 04: num_nulls (i64) = -1, }, 05: position (i32) = 14, }, [12] = TColumn { 01: columnName (string) = "pa__bundle__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 38.994274139404297, 02: max_size (i64) = 39, 03: num_distinct_values (i64) = 1096175, 04: num_nulls (i64) = -1, }, 05: position (i32) = 15, }, [13] = TColumn { 01: columnName (string) = "pa__arrival_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = 996132, 04: num_nulls (i64) = -1, }, 05: position (i32) = 16, }, [14] = TColumn { 01: columnName (string) = "pa__processed_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = 1126708, 04: num_nulls (i64) = -1, }, 05: position (i32) = 17, }, [15] = TColumn { 01: columnName (string) = "received_time", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = 
TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Time recorded when the request is received by the web server", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 0, 02: max_size (i64) = 0, 03: num_distinct_values (i64) = 0, 04: num_nulls (i64) = -1, }, 05: position (i32) = 18, }, [16] = TColumn { 01: columnName (string) = "_v", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "client version to be used for continual deployments like Flings or SaaS applications", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 3, 02: max_size (i64) = 3, 03: num_distinct_values (i64) = 1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 19, }, [17] = TColumn { 01: columnName (string) = "country_from_ip", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Country from which the user is using the UI (derived)", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 0, 02: max_size (i64) = 0, 03: num_distinct_values (i64) = 0, 04: num_nulls (i64) = -1, }, 05: position (i32) = 20, }, [18] = TColumn { 01: columnName (string) = "state_from_ip", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "State from which the user is using the UI (derived)", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 0, 02: max_size (i64) = 0, 03: num_distinct_values (i64) = 0, 04: num_nulls (i64) = -1, }, 05: position (i32) = 21, }, }, 07: clustering_columns (list) = list[3] { [0] = TColumn { 01: columnName (string) = "pa__arrival_day", 02: columnType (struct) = TColumnType { 01: 
types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 0, }, [1] = TColumn { 01: columnName (string) = "pa__collector_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 1, }, [2] = TColumn { 01: columnName (string) = "pa__schema_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 2, }, }, 08: table_stats (struct) = TTableStats { 01: num_rows (i64) = 1093263, }, 09: table_type (i32) = 0, 10: hdfs_table (struct) = THdfsTable { 01: hdfsBaseDir (string) = "hdfs://ph-hdp-prd-nn01:8020/user/hive/warehouse/history.db/esxi_hostinfo", 02: colNames (list) = list[22] { [0] = "pa__arrival_day", [1] = "pa__collector_id", [2] = "pa__schema_version", [3] = "timestamp", [4] = "browser", [5] = "os", [6] = "license", [7] = "hostclientversion", [8] = "ismanagedbyvc", [9] = "esxversion", [10] = "id", [11] = "locale", [12] = "useragent", [13] = "pa__is_external", [14] = "pa__collector_instance_id", [15] = "pa__bundle__fk", [16] = "pa__arrival_ts", [17] = "pa__processed_ts", [18] = "received_time", [19] = "_v", [20] = "country_from_ip", [21] = "state_from_ip", }, 03: nullPartitionKeyValue (string) = 
"__HIVE_DEFAULT_PARTITION__", 04: partitions (map) = map[415] { -1 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[0] { }, 08: blockSize (i32) = 0, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = -1, 15: hms_parameters (map) = map[0] { }, }, 1488 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1447200000, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "hostclient.1_0", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) 
= "c54f353f27e0a0c6-be12b2fbe11d7da8_1826909299_data.0.parq", 02: length (i64) = 2573, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1462372264915, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 2573, 03: replica_host_idxs (list) = list[3] { [0] = 0, [1] = 1, [2] = 2, }, 04: disk_ids (list) = list[3] { [0] = 1, [1] = 1, [2] = 0, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=1447200000/pa__collector_id=hostclient.1_0/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = 1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 1488, 15: hms_parameters (map) = map[8] { "COLUMN_STATS_ACCURATE" -> "true", "impala_intermediate_stats_chunk0" -> "HBYCABsTjA1zdGF0[...](1072)", "impala_intermediate_stats_num_chunks" -> "1", "numFiles" -> "1", "numRows" -> "1", "rawDataSize" -> "-1", "totalSize" -> "2573", "transient_lastDdlTime" -> "1462372267", }, }, 1490 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1449014400, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: 
type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "hostclient.1_0", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "c54f353f27e0a0c6-be12b2fbe11d7dae_1015574440_data.0.parq", 02: length (i64) = 2695, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1462372264906, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 2695, 03: replica_host_idxs (list) = list[3] { [0] = 3, [1] = 4, [2] = 5, }, 04: disk_ids (list) = list[3] { [0] = 0, [1] = 0, [2] = 1, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=1449014400/pa__collector_id=hostclient.1_0/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = 2, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 1490, 15: hms_parameters (map) = map[8] { "COLUMN_STATS_ACCURATE" -> "true", "impala_intermediate_stats_chunk0" -> "HBYEABsTjA1zdGF0[...](1100)", "impala_intermediate_stats_num_chunks" -> "1", "numFiles" -> "1", "numRows" -> "2", "rawDataSize" -> "-1", "totalSize" -> "2695", "transient_lastDdlTime" -> "1462372267", }, }, 1492 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs ( 
I0123 01:10:20.562126 11602 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ResetMetadata from 10.153.201.25:52901 took 18s367ms I0123 01:10:20.567548 12831 catalog-server.cc:83] ResetMetadata(): response=TResetMetadataResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 43405, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 04: updated_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 43405, 05: table (struct) = TTable { 01: db_name (string) = "history", 02: tbl_name (string) = "h5_ui_errors", 04: id (i32) = 4139, 05: access_level (i32) = 1, 06: columns (list) = list[20] { [0] = TColumn { 01: columnName (string) = "id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 36, 02: max_size (i64) = 36, 03: num_distinct_values (i64) = 1203160, 04: num_nulls (i64) = -1, }, 05: position (i32) = 3, }, [1] = TColumn { 01: columnName (string) = "user", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 0.47408992052078247, 02: max_size (i64) = 68, 03: num_distinct_values (i64) = 530, 04: num_nulls (i64) = -1, }, 05: position (i32) = 4, }, [2] = TColumn { 01: columnName (string) = "message", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 
449.67361450195312, 02: max_size (i64) = 1043850, 03: num_distinct_values (i64) = 34690, 04: num_nulls (i64) = -1, }, 05: position (i32) = 5, }, [3] = TColumn { 01: columnName (string) = "state_from_ip", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 2, 02: max_size (i64) = 2, 03: num_distinct_values (i64) = 31, 04: num_nulls (i64) = -1, }, 05: position (i32) = 6, }, [4] = TColumn { 01: columnName (string) = "version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 11.391100883483887, 02: max_size (i64) = 111, 03: num_distinct_values (i64) = 991, 04: num_nulls (i64) = -1, }, 05: position (i32) = 7, }, [5] = TColumn { 01: columnName (string) = "client_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 4.7701358795166016, 02: max_size (i64) = 6, 03: num_distinct_values (i64) = 1371, 04: num_nulls (i64) = -1, }, 05: position (i32) = 8, }, [6] = TColumn { 01: columnName (string) = "url", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 133.04139709472656, 02: max_size (i64) = 738, 03: num_distinct_values (i64) = 64362, 04: num_nulls (i64) = -1, }, 05: position (i32) = 9, }, [7] = TColumn { 01: columnName (string) = "received_time", 02: columnType (struct) = TColumnType { 01: types (list) = 
list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 13, 02: max_size (i64) = 13, 03: num_distinct_values (i64) = 2865, 04: num_nulls (i64) = -1, }, 05: position (i32) = 10, }, [8] = TColumn { 01: columnName (string) = "stack_trace", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 5949.24267578125, 02: max_size (i64) = 154967, 03: num_distinct_values (i64) = 25787, 04: num_nulls (i64) = -1, }, 05: position (i32) = 11, }, [9] = TColumn { 01: columnName (string) = "pa__is_external", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 12, }, [10] = TColumn { 01: columnName (string) = "creation_date", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 13.024072647094727, 02: max_size (i64) = 62, 03: num_distinct_values (i64) = 1050405, 04: num_nulls (i64) = -1, }, 05: position (i32) = 13, }, [11] = TColumn { 01: columnName (string) = "pa__collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 24.760128021240234, 02: max_size (i64) = 40, 03: 
num_distinct_values (i64) = 13159, 04: num_nulls (i64) = -1, }, 05: position (i32) = 14, }, [12] = TColumn { 01: columnName (string) = "product", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16.999979019165039, 02: max_size (i64) = 17, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 15, }, [13] = TColumn { 01: columnName (string) = "country_from_ip", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 2, 02: max_size (i64) = 2, 03: num_distinct_values (i64) = 26, 04: num_nulls (i64) = -1, }, 05: position (i32) = 16, }, [14] = TColumn { 01: columnName (string) = "rating", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 0, 04: num_nulls (i64) = -1, }, 05: position (i32) = 17, }, [15] = TColumn { 01: columnName (string) = "ua", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 104.68424987792969, 02: max_size (i64) = 274, 03: num_distinct_values (i64) = 4757, 04: num_nulls (i64) = -1, }, 05: position (i32) = 18, }, [16] = TColumn { 01: columnName (string) = "pa__bundle__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) 
= TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 36.251823425292969, 02: max_size (i64) = 41, 03: num_distinct_values (i64) = 1252171, 04: num_nulls (i64) = -1, }, 05: position (i32) = 19, }, [17] = TColumn { 01: columnName (string) = "pa__arrival_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = 903572, 04: num_nulls (i64) = -1, }, 05: position (i32) = 20, }, [18] = TColumn { 01: columnName (string) = "pa__processed_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = 1177016, 04: num_nulls (i64) = -1, }, 05: position (i32) = 21, }, [19] = TColumn { 01: columnName (string) = "_v", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 3, 02: max_size (i64) = 3, 03: num_distinct_values (i64) = 1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 22, }, }, 07: clustering_columns (list) = list[3] { [0] = TColumn { 01: columnName (string) = "pa__arrival_day", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 
0, }, [1] = TColumn { 01: columnName (string) = "pa__collector_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 1, }, [2] = TColumn { 01: columnName (string) = "pa__schema_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 2, }, }, 08: table_stats (struct) = TTableStats { 01: num_rows (i64) = 1262115, }, 09: table_type (i32) = 0, 10: hdfs_table (struct) = THdfsTable { 01: hdfsBaseDir (string) = "hdfs://ph-hdp-prd-nn01:8020/user/hive/warehouse/history.db/h5_ui_errors", 02: colNames (list) = list[23] { [0] = "pa__arrival_day", [1] = "pa__collector_id", [2] = "pa__schema_version", [3] = "id", [4] = "user", [5] = "message", [6] = "state_from_ip", [7] = "version", [8] = "client_id", [9] = "url", [10] = "received_time", [11] = "stack_trace", [12] = "pa__is_external", [13] = "creation_date", [14] = "pa__collector_instance_id", [15] = "product", [16] = "country_from_ip", [17] = "rating", [18] = "ua", [19] = "pa__bundle__fk", [20] = "pa__arrival_ts", [21] = "pa__processed_ts", [22] = "_v", }, 03: nullPartitionKeyValue (string) = "__HIVE_DEFAULT_PARTITION__", 04: partitions (map) = map[421] { -1 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[0] { }, 08: blockSize (i32) 
= 0, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = -1, 15: hms_parameters (map) = map[0] { }, }, 12833 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1457308800, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "vsphere_h5.1_0", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "4d4949b717c92da7-832297456f4a319b_1207033728_data.0.parq", 02: length (i64) = 4653, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1461921384582, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 4653, 03: replica_host_idxs (list) = list[3] { [0] = 0, [1] = 1, 
[2] = 2, }, 04: disk_ids (list) = list[3] { [0] = 0, [1] = 1, [2] = 1, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=1457308800/pa__collector_id=vsphere_h5.1_0/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = 11, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 12833, 15: hms_parameters (map) = map[8] { "COLUMN_STATS_ACCURATE" -> "true", "impala_intermediate_stats_chunk0" -> "HBYWABsUjA1zdGF0[...](1396)", "impala_intermediate_stats_num_chunks" -> "1", "numFiles" -> "1", "numRows" -> "11", "rawDataSize" -> "-1", "totalSize" -> "4653", "transient_lastDdlTime" -> "1461921385", }, }, 12835 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1457395200, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "vsphere_h5.1_0", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: 
type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "4d4949b717c92da7-832297456f4a3197_116194188_data.0.parq", 02: length (i64) = 63729, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1461921384579, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 63729, 03: replica_host_idxs (list) = list[3] { [0] = 3, [1] = 4, [2] = 0, }, 04: disk_ids (list) = list[3] { [0] = 1, [1] = 0, [2] = 1, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=1457395200/pa__collector_id=vsphere_h5.1_0/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = 420, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 12835, 15: hms_parameters (map) = map[10] { "COLUMN_STATS_ACCURATE" -> "true", "impala_intermediate_stats_chunk0" -> "HBbIBgAbFIwNc3Rh[...](4000)", "impala_intermediate_stats_chunk1" -> "AAMAAQAAAQACAAAC[...](4000)", "impala_intermediate_stats_chunk2" -> "PxbIBgACX3YYCP8A[...](460)", "impala_intermediate_stats_num_chunks" -> "3", "numFiles" -> "1", "numRows" -> "420", "rawDataSi I0123 01:10:20.602327 12831 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ResetMetadata from 10.153.201.26:37155 took 18s388ms I0123 01:10:20.641695 8549 catalog-server.cc:83] ResetMetadata(): response=TResetMetadataResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 43401, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 
02: error_msgs (list) = list[0] { }, }, 04: updated_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 43401, 05: table (struct) = TTable { 01: db_name (string) = "history", 02: tbl_name (string) = "pa__streaming_batch", 04: id (i32) = 4148, 05: access_level (i32) = 1, 06: columns (list) = list[16] { [0] = TColumn { 01: columnName (string) = "id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "ID", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 36, 02: max_size (i64) = 36, 03: num_distinct_values (i64) = 718203, 04: num_nulls (i64) = -1, }, 05: position (i32) = 3, }, [1] = TColumn { 01: columnName (string) = "batch_time", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: comment (string) = ">>Enter description here<<. Column was created by collectorId ph_streaming_etl.1_0. If you need more information find the owner of ph_streaming_etl.1_0 here: https://wiki.eng.vmware.com/PhoneHome/Platform/10/AdoptionStatus.", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 249023, 04: num_nulls (i64) = -1, }, 05: position (i32) = 4, }, [2] = TColumn { 01: columnName (string) = "written_tables", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = ">>Enter description here<<. Column was created by collectorId ph_streaming_etl.1_0. 
If you need more information find the owner of ph_streaming_etl.1_0 here: https://wiki.eng.vmware.com/PhoneHome/Platform/10/AdoptionStatus.", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 133.85873413085938, 02: max_size (i64) = 4111, 03: num_distinct_values (i64) = 1174, 04: num_nulls (i64) = -1, }, 05: position (i32) = 5, }, [3] = TColumn { 01: columnName (string) = "pa__is_external", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 03: comment (string) = ">>Enter description here<<. Column was created by collectorId ph_streaming_etl.1_0. If you need more information find the owner of ph_streaming_etl.1_0 here: https://wiki.eng.vmware.com/PhoneHome/Platform/10/AdoptionStatus.", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 6, }, [4] = TColumn { 01: columnName (string) = "total_delay", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: comment (string) = ">>Enter description here<<. Column was created by collectorId ph_streaming_etl.1_0. If you need more information find the owner of ph_streaming_etl.1_0 here: https://wiki.eng.vmware.com/PhoneHome/Platform/10/AdoptionStatus.", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 123077, 04: num_nulls (i64) = -1, }, 05: position (i32) = 7, }, [5] = TColumn { 01: columnName (string) = "processing_end_time", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: comment (string) = ">>Enter description here<<. 
Column was created by collectorId ph_streaming_etl.1_0. If you need more information find the owner of ph_streaming_etl.1_0 here: https://wiki.eng.vmware.com/PhoneHome/Platform/10/AdoptionStatus.", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 740844, 04: num_nulls (i64) = -1, }, 05: position (i32) = 8, }, [6] = TColumn { 01: columnName (string) = "scheduling_delay", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: comment (string) = ">>Enter description here<<. Column was created by collectorId ph_streaming_etl.1_0. If you need more information find the owner of ph_streaming_etl.1_0 here: https://wiki.eng.vmware.com/PhoneHome/Platform/10/AdoptionStatus.", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 26034, 04: num_nulls (i64) = -1, }, 05: position (i32) = 9, }, [7] = TColumn { 01: columnName (string) = "processing_delay", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: comment (string) = ">>Enter description here<<. Column was created by collectorId ph_streaming_etl.1_0. 
If you need more information find the owner of ph_streaming_etl.1_0 here: https://wiki.eng.vmware.com/PhoneHome/Platform/10/AdoptionStatus.", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 99728, 04: num_nulls (i64) = -1, }, 05: position (i32) = 10, }, [8] = TColumn { 01: columnName (string) = "pa__bundle__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = ">>Enter description here<<. Column was created by collectorId ph_streaming_etl.1_0. If you need more information find the owner of ph_streaming_etl.1_0 here: https://wiki.eng.vmware.com/PhoneHome/Platform/10/AdoptionStatus.", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 36.077564239501953, 02: max_size (i64) = 41, 03: num_distinct_values (i64) = 719057, 04: num_nulls (i64) = -1, }, 05: position (i32) = 11, }, [9] = TColumn { 01: columnName (string) = "pa__arrival_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 03: comment (string) = ">>Enter description here<<. Column was created by collectorId ph_streaming_etl.1_0. If you need more information find the owner of ph_streaming_etl.1_0 here: https://wiki.eng.vmware.com/PhoneHome/Platform/10/AdoptionStatus.", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = 702705, 04: num_nulls (i64) = -1, }, 05: position (i32) = 12, }, [10] = TColumn { 01: columnName (string) = "total_messages", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: comment (string) = ">>Enter description here<<. 
Column was created by collectorId ph_streaming_etl.1_0. If you need more information find the owner of ph_streaming_etl.1_0 here: https://wiki.eng.vmware.com/PhoneHome/Platform/10/AdoptionStatus.", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 1377, 04: num_nulls (i64) = -1, }, 05: position (i32) = 13, }, [11] = TColumn { 01: columnName (string) = "submission_time", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: comment (string) = ">>Enter description here<<. Column was created by collectorId ph_streaming_etl.1_0. If you need more information find the owner of ph_streaming_etl.1_0 here: https://wiki.eng.vmware.com/PhoneHome/Platform/10/AdoptionStatus.", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 611822, 04: num_nulls (i64) = -1, }, 05: position (i32) = 14, }, [12] = TColumn { 01: columnName (string) = "processing_start_time", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: comment (string) = ">>Enter description here<<. Column was created by collectorId ph_streaming_etl.1_0. 
If you need more information find the owner of ph_streaming_etl.1_0 here: https://wiki.eng.vmware.com/PhoneHome/Platform/10/AdoptionStatus.", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 634513, 04: num_nulls (i64) = -1, }, 05: position (i32) = 15, }, [13] = TColumn { 01: columnName (string) = "pa__processed_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 03: comment (string) = ">>Enter description here<<. Column was created by collectorId ph_streaming_etl.1_0. If you need more information find the owner of ph_streaming_etl.1_0 here: https://wiki.eng.vmware.com/PhoneHome/Platform/10/AdoptionStatus.", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = 743555, 04: num_nulls (i64) = -1, }, 05: position (i32) = 16, }, [14] = TColumn { 01: columnName (string) = "streaming_collector_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = ">>Enter description here<<. Column was created by collectorId ph_streaming_etl.1_0. 
If you need more information find the owner of ph_streaming_etl.1_0 here: https://wiki.eng.vmware.com/PhoneHome/Platform/10/AdoptionStatus.", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 13.740486145019531, 02: max_size (i64) = 52, 03: num_distinct_values (i64) = 237, 04: num_nulls (i64) = -1, }, 05: position (i32) = 17, }, [15] = TColumn { 01: columnName (string) = "pa__collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = ">>Enter description here<<. Column was created by collectorId ph_streaming_etl.1_0. If you need more information find the owner of ph_streaming_etl.1_0 here: https://wiki.eng.vmware.com/PhoneHome/Platform/10/AdoptionStatus.", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 7.5838623046875, 02: max_size (i64) = 10, 03: num_distinct_values (i64) = 4, 04: num_nulls (i64) = -1, }, 05: position (i32) = 18, }, }, 07: clustering_columns (list) = list[3] { [0] = TColumn { 01: columnName (string) = "pa__arrival_day", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: comment (string) = "Unix timestamp when this row arrived at VMWare, truncated to day", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 0, }, [1] = TColumn { 01: columnName (string) = "pa__collector_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "ID of the collector that contributed to the given row", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 
02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 1, }, [2] = TColumn { 01: columnName (string) = "pa__schema_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: comment (string) = "Schema version. Reserved for future use", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 2, }, }, 08: table_stats (struct) = TTableStats { 01: num_rows (i64) = 754916, }, 09: table_type (i32) = 0, 10: hdfs_table (struct) = THdfsTable { 01: hdfsBaseDir (string) = "hdfs://ph-hdp-prd-nn01:8020/user/hive/warehouse/history.db/pa__streaming_batch", 02: colNames (list) = list[19] { [0] = "pa__arrival_day", [1] = "pa__collector_id", [2] = "pa__schema_version", [3] = "id", [4] = "batch_time", [5] = "written_tables", [6] = "pa__is_external", [7] = "total_delay", [8] = "processing_end_time", [9] = "scheduling_delay", [10] = "processing_delay", [11] = "pa__bundle__fk", [12] = "pa__arrival_ts", [13] = "total_messages", [14] = "submission_time", [15] = "processing_start_time", [16] = "pa__processed_ts", [17] = "streaming_collector_id", [18] = "pa__collector_instance_id", }, 03: nullPartitionKeyValue (string) = "__HIVE_DEFAULT_PARTITION__", 04: partitions (map) = map[213] { -1 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[0] { }, 08: blockSize (i32) = 0, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = -1, 15: hms_parameters (map) = map[0] { }, }, 14415 -> THdfsPartition { 01: lineDelim (byte) = 
0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1466553600, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "ph_streaming_etl.1_0", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "8848377508669f75-5846ef772badc993_299755300_data.0.parq", 02: length (i64) = 9993, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1466786253705, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 9993, 03: replica_host_idxs (list) = list[3] { [0] = 0, [1] = 1, [2] = 2, }, 04: disk_ids (list) = list[3] { [0] = 1, [1] = 0, [2] = 0, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) 
= 0, 02: suffix (string) = "pa__arrival_day=1466553600/pa__collector_id=ph_streaming_etl.1_0/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = 52, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 14415, 15: hms_parameters (map) = map[8] { "COLUMN_STATS_ACCURATE" -> "true", "impala_intermediate_stats_chunk0" -> "HBZoABsQjA5wYV9f[...](3752)", "impala_intermediate_stats_num_chunks" -> "1", "numFiles" -> "1", "numRows" -> "52", "rawDataSize" -> "-1", "totalSize" -> "9993", "transient_lastDdlTime" -> "1466786253", }, }, 14416 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1466640000, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "ph_streaming_etl.1_0", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: 
file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "8848377508669f75-5846ef772badc995_1570211156_data.0.parq", 02: length (i64) = 23152, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1466786253709, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 23152, 03: replica_host_idxs (list) = list[3] { [0] = 3, [1] = 4, [2] = 2, }, 04: disk_ids (list) = list[3] { [0] = 0, [1] = 1, [2] =
I0123 01:10:20.677780  8549 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ResetMetadata from 10.153.201.26:33597 took 18s802ms
I0123 01:10:20.875123 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:10:20.875223 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:10:20.953657 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:10:20.953853 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 1.000ms
I0123 01:10:21.222101 11522 rpc-trace.cc:184] RPC call: CatalogService.ResetMetadata(from 10.153.201.23:33385)
I0123 01:10:21.222973 11522 catalog-server.cc:77] ResetMetadata(): request=TResetMetadataRequest { 01: protocol_version (i32) = 0, 02: is_refresh (bool) = true, 03: table_name (struct) = TTableName { 01: db_name (string) = "history", 02: table_name (string) = "h5_ui_errors", }, 05: partition_spec (list) = list[3] { [0] = TPartitionKeyValue { 01: name (string) = "pa__schema_version", 02: value (string) = "1", }, [1] = TPartitionKeyValue { 01: name (string) = "pa__collector_id", 02: value (string) = "vsphere_h5.1_0", }, [2] = TPartitionKeyValue { 01: name (string) = "pa__arrival_day", 02: value (string) = "1485129600", }, }, }
I0123 01:10:21.225841 11522 CatalogServiceCatalog.java:1191] Refreshing Partition metadata: history.h5_ui_errors pa__arrival_day=1485129600/pa__collector_id=vsphere_h5.1_0/pa__schema_version=1
I0123 01:10:21.237725 12621 rpc-trace.cc:184] RPC call: CatalogService.ResetMetadata(from 10.153.201.22:38759)
I0123 01:10:21.241904 12621 catalog-server.cc:77] ResetMetadata(): request=TResetMetadataRequest { 01: protocol_version (i32) = 0, 02: is_refresh (bool) = true, 03: table_name (struct) = TTableName { 01: db_name (string) = "history", 02: table_name (string) = "itfm_ui", }, 05: partition_spec (list) = list[3] { [0] = TPartitionKeyValue { 01: name (string) = "pa__schema_version", 02: value (string) = "1", }, [1] = TPartitionKeyValue { 01: name (string) = "pa__collector_id", 02: value (string) = "vrb_ui.7_1_0", }, [2] = TPartitionKeyValue { 01: name (string) = "pa__arrival_day", 02: value (string) = "1485129600", }, }, }
I0123 01:10:21.243675 12621 CatalogServiceCatalog.java:1191] Refreshing Partition metadata: history.itfm_ui pa__arrival_day=1485129600/pa__collector_id=vrb_ui.7_1_0/pa__schema_version=1
I0123 01:10:21.288323 12621 HdfsTable.java:348] load block md for itfm_ui file part-00001
I0123 01:10:21.290977 12621 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_1
I0123 01:10:21.292354 12621 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_10
I0123 01:10:21.293527 12621 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_11
I0123 01:10:21.294114 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000
I0123 01:10:21.294713 12621 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_12
I0123 01:10:21.295421 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_1
I0123 01:10:21.296105 12621 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_13
I0123 01:10:21.297083 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_10
I0123 01:10:21.297617 12621 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_14
I0123 01:10:21.299026 12621 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_15
I0123 01:10:21.299325 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_11
I0123 01:10:21.300839 12621 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_2
I0123 01:10:21.301128 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_12
I0123 01:10:21.302083 12621 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_3
I0123 01:10:21.302479 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_13
I0123 01:10:21.303869 12621 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_4
I0123 01:10:21.304107 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_14
I0123 01:10:21.305593 12621 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_5
I0123 01:10:21.306114 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_15
I0123 01:10:21.306974 12621 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_6
I0123 01:10:21.307334 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_16
I0123 01:10:21.308796 12621 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_7
I0123 01:10:21.309144 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_17
I0123 01:10:21.311326 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_18
I0123 01:10:21.311662 12621 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_8
I0123 01:10:21.313606 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_19
I0123 01:10:21.314234 12621 HdfsTable.java:348] load block md for itfm_ui file part-00001_copy_9
I0123 01:10:21.315713 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_2
I0123 01:10:21.317018 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_20
I0123 01:10:21.318624 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_21
I0123 01:10:21.319738 12621 FsPermissionChecker.java:290] No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs) Java exception follows:
org.apache.hadoop.hdfs.protocol.AclException: The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false.
    at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
    at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3344)
    at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1983)
    at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1980)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getAclStatus(DistributedFileSystem.java:1980)
    at com.cloudera.impala.util.FsPermissionChecker.getPermissions(FsPermissionChecker.java:288)
    at com.cloudera.impala.catalog.HdfsTable.getAvailableAccessLevel(HdfsTable.java:760)
    at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:844)
    at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:792)
    at com.cloudera.impala.catalog.HdfsTable.reloadPartition(HdfsTable.java:1951)
    at com.cloudera.impala.catalog.CatalogServiceCatalog.reloadPartition(CatalogServiceCatalog.java:1210)
    at com.cloudera.impala.service.CatalogOpExecutor.execResetMetadata(CatalogOpExecutor.java:2707)
    at com.cloudera.impala.service.JniCatalog.resetMetadata(JniCatalog.java:151)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AclException): The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false.
    at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
    at org.apache.hadoop.ipc.Client.call(Client.java:1471)
    at org.apache.hadoop.ipc.Client.call(Client.java:1408)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
    at com.sun.proxy.$Proxy10.getAclStatus(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAclStatus(ClientNamenodeProtocolTranslatorPB.java:1327)
    at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy11.getAclStatus(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3342)
    ... 12 more
I0123 01:10:21.321552 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_22
I0123 01:10:21.322751 12621 HdfsTable.java:441] Loading disk ids for: history.itfm_ui. nodes: 14. filesystem: hdfs://ph-hdp-prd-nn01:8020
I0123 01:10:21.323078 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_23
I0123 01:10:21.324524 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_24
I0123 01:10:21.325624 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_25
I0123 01:10:21.327134 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_26
I0123 01:10:21.328655 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_27
I0123 01:10:21.329951 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_28
I0123 01:10:21.331298 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_29
I0123 01:10:21.332387 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_3
I0123 01:10:21.333611 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_30
I0123 01:10:21.335227 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_31
I0123 01:10:21.337013 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_32
I0123 01:10:21.338436 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_33
I0123 01:10:21.339869 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_34
I0123 01:10:21.341809 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_35
I0123 01:10:21.343205 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_36
I0123 01:10:21.344466 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_37
I0123 01:10:21.345688 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_38
I0123 01:10:21.346827 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_39
I0123 01:10:21.348357 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_4
I0123 01:10:21.349885 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_40
I0123 01:10:21.351179 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_41
I0123 01:10:21.352601 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_42
I0123 01:10:21.353966 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_43
I0123 01:10:21.355387 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_44
I0123 01:10:21.356585 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_45
I0123 01:10:21.357728 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_46
I0123 01:10:21.359045 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_47
I0123 01:10:21.361027 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_48
I0123 01:10:21.363391 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_49
I0123 01:10:21.364657 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_5
I0123 01:10:21.365949 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_50
I0123 01:10:21.367475 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_6
I0123 01:10:21.368644 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_7
I0123 01:10:21.369698 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_8
I0123 01:10:21.370813 11522 HdfsTable.java:348] load block md for h5_ui_errors file part-00000_copy_9
I0123 01:10:21.377048 11522 FsPermissionChecker.java:290] No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs) Java exception follows:
org.apache.hadoop.hdfs.protocol.AclException: The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false.
    at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
    at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3344)
    at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1983)
    at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1980)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getAclStatus(DistributedFileSystem.java:1980)
    at com.cloudera.impala.util.FsPermissionChecker.getPermissions(FsPermissionChecker.java:288)
    at com.cloudera.impala.catalog.HdfsTable.getAvailableAccessLevel(HdfsTable.java:760)
    at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:844)
    at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:792)
    at com.cloudera.impala.catalog.HdfsTable.reloadPartition(HdfsTable.java:1951)
    at com.cloudera.impala.catalog.CatalogServiceCatalog.reloadPartition(CatalogServiceCatalog.java:1210)
    at com.cloudera.impala.service.CatalogOpExecutor.execResetMetadata(CatalogOpExecutor.java:2707)
    at com.cloudera.impala.service.JniCatalog.resetMetadata(JniCatalog.java:151)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AclException): The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false.
    at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
    at org.apache.hadoop.ipc.Client.call(Client.java:1471)
    at org.apache.hadoop.ipc.Client.call(Client.java:1408)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
    at com.sun.proxy.$Proxy10.getAclStatus(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAclStatus(ClientNamenodeProtocolTranslatorPB.java:1327)
    at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy11.getAclStatus(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3342)
    ... 12 more
I0123 01:10:21.378571 11522 HdfsTable.java:441] Loading disk ids for: history.h5_ui_errors. nodes: 14. filesystem: hdfs://ph-hdp-prd-nn01:8020
I0123 01:10:21.860018  4102 rpc-trace.cc:184] RPC call: CatalogService.ResetMetadata(from 10.153.201.21:49860)
I0123 01:10:21.860353  4102 catalog-server.cc:77] ResetMetadata(): request=TResetMetadataRequest { 01: protocol_version (i32) = 0, 02: is_refresh (bool) = true, 03: table_name (struct) = TTableName { 01: db_name (string) = "history_staging", 02: table_name (string) = "esxi_crashreport", }, 05: partition_spec (list) = list[3] { [0] = TPartitionKeyValue { 01: name (string) = "pa__schema_version", 02: value (string) = "1", }, [1] = TPartitionKeyValue { 01: name (string) = "pa__collector_id", 02: value (string) = "hostclient.1_0", }, [2] = TPartitionKeyValue { 01: name (string) = "pa__arrival_day", 02: value (string) = "1485129600", }, }, }
I0123 01:10:21.876044 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:10:21.876163 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:10:21.878985  4162 rpc-trace.cc:184] RPC call: CatalogService.ResetMetadata(from 10.153.201.22:58692)
I0123 01:10:21.879541  4162 catalog-server.cc:77] ResetMetadata(): request=TResetMetadataRequest { 01: protocol_version (i32) = 0, 02: is_refresh (bool) = true, 03: table_name (struct) = TTableName { 01: db_name (string) = "history_staging", 02: table_name (string) = "bundle", }, 05: partition_spec (list) = list[3] { [0] = TPartitionKeyValue { 01: name (string) = "pa__schema_version", 02: value (string) = "1", }, [1] = TPartitionKeyValue { 01: name (string) = "pa__collector_id", 02: value (string) =
"astro.1_0", }, [2] = TPartitionKeyValue { 01: name (string) = "pa__arrival_day", 02: value (string) = "1485129600", }, }, } I0123 01:10:22.187422 20676 rpc-trace.cc:184] RPC call: CatalogService.ResetMetadata(from 10.153.201.25:58551) I0123 01:10:22.189353 20676 catalog-server.cc:77] ResetMetadata(): request=TResetMetadataRequest { 01: protocol_version (i32) = 0, 02: is_refresh (bool) = true, 03: table_name (struct) = TTableName { 01: db_name (string) = "history", 02: table_name (string) = "esxi_crashreport", }, 05: partition_spec (list) = list[3] { [0] = TPartitionKeyValue { 01: name (string) = "pa__schema_version", 02: value (string) = "1", }, [1] = TPartitionKeyValue { 01: name (string) = "pa__collector_id", 02: value (string) = "hostclient.1_0", }, [2] = TPartitionKeyValue { 01: name (string) = "pa__arrival_day", 02: value (string) = "1485129600", }, }, } I0123 01:10:22.196254 4102 CatalogServiceCatalog.java:1191] Refreshing Partition metadata: history_staging.esxi_crashreport pa__arrival_day=1485129600/pa__collector_id=hostclient.1_0/pa__schema_version=1 I0123 01:10:22.196835 4162 CatalogServiceCatalog.java:1191] Refreshing Partition metadata: history_staging.bundle pa__arrival_day=1485129600/pa__collector_id=astro.1_0/pa__schema_version=1 I0123 01:10:22.197363 20676 CatalogServiceCatalog.java:1191] Refreshing Partition metadata: history.esxi_crashreport pa__arrival_day=1485129600/pa__collector_id=hostclient.1_0/pa__schema_version=1 I0123 01:10:22.220599 1914 rpc-trace.cc:184] RPC call: CatalogService.ResetMetadata(from 10.153.201.26:54901) I0123 01:10:22.224607 1914 catalog-server.cc:77] ResetMetadata(): request=TResetMetadataRequest { 01: protocol_version (i32) = 0, 02: is_refresh (bool) = true, 03: table_name (struct) = TTableName { 01: db_name (string) = "history", 02: table_name (string) = "h5_ui", }, 05: partition_spec (list) = list[3] { [0] = TPartitionKeyValue { 01: name (string) = "pa__schema_version", 02: value (string) = "1", }, [1] = 
TPartitionKeyValue { 01: name (string) = "pa__collector_id", 02: value (string) = "vsphere_h5c.6_5", }, [2] = TPartitionKeyValue { 01: name (string) = "pa__arrival_day", 02: value (string) = "1485129600", }, }, } I0123 01:10:22.228200 1914 CatalogServiceCatalog.java:1191] Refreshing Partition metadata: history.h5_ui pa__arrival_day=1485129600/pa__collector_id=vsphere_h5c.6_5/pa__schema_version=1 I0123 01:10:22.241066 4162 HdfsTable.java:348] load block md for bundle file part-00000 I0123 01:10:22.242951 4102 HdfsTable.java:348] load block md for esxi_crashreport file part-00001 I0123 01:10:22.243402 4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_1 I0123 01:10:22.243599 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001 I0123 01:10:22.244107 4102 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_1 I0123 01:10:22.244827 4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_10 I0123 01:10:22.244947 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_1 I0123 01:10:22.245292 4102 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_2 I0123 01:10:22.245807 4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_11 I0123 01:10:22.246304 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_10 I0123 01:10:22.246992 4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_12 I0123 01:10:22.247485 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_11 I0123 01:10:22.248234 4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_13 I0123 01:10:22.248687 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_12 I0123 01:10:22.249368 4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_14 I0123 01:10:22.249775 20676 HdfsTable.java:348] load block md for esxi_crashreport file 
part-00001_copy_13
I0123 01:10:22.250345 4102 FsPermissionChecker.java:290] No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs) Java exception follows:
org.apache.hadoop.hdfs.protocol.AclException: The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false.
    at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
    at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3344)
    at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1983)
    at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1980)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getAclStatus(DistributedFileSystem.java:1980)
    at com.cloudera.impala.util.FsPermissionChecker.getPermissions(FsPermissionChecker.java:288)
    at com.cloudera.impala.catalog.HdfsTable.getAvailableAccessLevel(HdfsTable.java:760)
    at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:844)
    at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:792)
    at com.cloudera.impala.catalog.HdfsTable.reloadPartition(HdfsTable.java:1951)
    at com.cloudera.impala.catalog.CatalogServiceCatalog.reloadPartition(CatalogServiceCatalog.java:1210)
    at com.cloudera.impala.service.CatalogOpExecutor.execResetMetadata(CatalogOpExecutor.java:2707)
    at com.cloudera.impala.service.JniCatalog.resetMetadata(JniCatalog.java:151)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AclException): The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false.
    at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
    at org.apache.hadoop.ipc.Client.call(Client.java:1471)
    at org.apache.hadoop.ipc.Client.call(Client.java:1408)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
    at com.sun.proxy.$Proxy10.getAclStatus(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAclStatus(ClientNamenodeProtocolTranslatorPB.java:1327)
    at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy11.getAclStatus(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3342)
    ... 12 more
I0123 01:10:22.252495 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_14
I0123 01:10:22.252666 4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_15
I0123 01:10:22.253216 4102 HdfsTable.java:441] Loading disk ids for: history_staging.esxi_crashreport. nodes: 14. filesystem: hdfs://ph-hdp-prd-nn01:8020
I0123 01:10:22.253842 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_15
I0123 01:10:22.253953 4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_16
I0123 01:10:22.255043 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_16
I0123 01:10:22.255153 4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_17
I0123 01:10:22.256433 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_17
I0123 01:10:22.256745 4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_18
I0123 01:10:22.257624 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_18
I0123 01:10:22.257967 4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_19
I0123 01:10:22.258545 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_19
I0123 01:10:22.258939 4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_2
I0123 01:10:22.259774 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_2
I0123 01:10:22.259977 4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_20
I0123 01:10:22.260767 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_20
I0123 01:10:22.261299 4162 HdfsTable.java:348] load block md for bundle file
part-00000_copy_21
I0123 01:10:22.261710 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_21
I0123 01:10:22.263257 4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_22
I0123 01:10:22.263392 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_22
I0123 01:10:22.264660 4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_23
I0123 01:10:22.265041 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_23
I0123 01:10:22.265769 1914 HdfsTable.java:348] load block md for h5_ui file part-00000
I0123 01:10:22.266109 4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_24
I0123 01:10:22.266293 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_24
I0123 01:10:22.266856 1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_1
I0123 01:10:22.267344 4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_25
I0123 01:10:22.267669 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_25
I0123 01:10:22.267968 1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_10
I0123 01:10:22.268491 4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_26
I0123 01:10:22.268859 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_26
I0123 01:10:22.268966 1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_11
I0123 01:10:22.269793 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_27
I0123 01:10:22.269951 4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_27
I0123 01:10:22.270413 1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_12
I0123 01:10:22.270747 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_28
I0123 01:10:22.271476 4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_28
I0123 01:10:22.271755 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_29
I0123 01:10:22.271903 1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_13
I0123 01:10:22.272790 4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_29
I0123 01:10:22.272894 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_3
I0123 01:10:22.273197 1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_14
I0123 01:10:22.273869 4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_3
I0123 01:10:22.274518 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_30
I0123 01:10:22.274677 1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_15
I0123 01:10:22.274808 4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_30
I0123 01:10:22.275575 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_31
I0123 01:10:22.276278 1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_16
I0123 01:10:22.276484 4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_31
I0123 01:10:22.276823 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_32
I0123 01:10:22.277199 1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_17
I0123 01:10:22.277590 4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_32
I0123 01:10:22.277932 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_33
I0123 01:10:22.278198 1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_18
I0123 01:10:22.279479 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_34
I0123 01:10:22.279598 4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_33
I0123 01:10:22.279714 1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_19
I0123 01:10:22.280799 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_35
I0123 01:10:22.280982 4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_34
I0123 01:10:22.281215 1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_2
I0123 01:10:22.281800 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_36
I0123 01:10:22.282099 4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_35
I0123 01:10:22.282435 1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_20
I0123 01:10:22.282982 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_37
I0123 01:10:22.283217 4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_36
I0123 01:10:22.283632 1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_21
I0123 01:10:22.284198 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_38
I0123 01:10:22.284366 4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_37
I0123 01:10:22.284775 1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_22
I0123 01:10:22.285429 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_39
I0123 01:10:22.285621 4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_38
I0123 01:10:22.286006 1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_23
I0123 01:10:22.286447 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_4
I0123 01:10:22.287026 4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_39
I0123 01:10:22.287407 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_40
I0123 01:10:22.287513 1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_24
I0123 01:10:22.288110 4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_4
I0123 01:10:22.288537 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_41
I0123 01:10:22.288779 1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_25
I0123 01:10:22.288777 4102 catalog-server.cc:83] ResetMetadata(): response=TResetMetadataResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 43412, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 04: updated_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 43412, 05: table (struct) = TTable { 01: db_name (string) = "history_staging", 02: tbl_name (string) = "esxi_crashreport", 04: id (i32) = 7366, 05: access_level (i32) = 1, 06: columns (list) = list[15] { [0] = TColumn { 01: columnName (string) = "id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 3, }, [1] = TColumn { 01: columnName (string) = "cause", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 4, }, [2] = TColumn { 01: columnName (string) = "count", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04:
col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 5, }, [3] = TColumn { 01: columnName (string) = "timestamp", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 6, }, [4] = TColumn { 01: columnName (string) = "stack", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 7, }, [5] = TColumn { 01: columnName (string) = "pa__is_external", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 8, }, [6] = TColumn { 01: columnName (string) = "pa__collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 9, }, [7] = TColumn { 01: columnName (string) = "state_from_ip", 02: columnType (struct) = TColumnType { 01: types (list) = 
list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 10, }, [8] = TColumn { 01: columnName (string) = "country_from_ip", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 11, }, [9] = TColumn { 01: columnName (string) = "received_time", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 12, }, [10] = TColumn { 01: columnName (string) = "pa__bundle__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 13, }, [11] = TColumn { 01: columnName (string) = "pa__arrival_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: 
position (i32) = 14, }, [12] = TColumn { 01: columnName (string) = "pa__processed_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 15, }, [13] = TColumn { 01: columnName (string) = "pa__download_day", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 16, }, [14] = TColumn { 01: columnName (string) = "_v", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 17, }, }, 07: clustering_columns (list) = list[3] { [0] = TColumn { 01: columnName (string) = "pa__arrival_day", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 38, 04: num_nulls (i64) = 0, }, 05: position (i32) = 0, }, [1] = TColumn { 01: columnName (string) = "pa__collector_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, 
}, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = 0, }, 05: position (i32) = 1, }, [2] = TColumn { 01: columnName (string) = "pa__schema_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 1, 04: num_nulls (i64) = 0, }, 05: position (i32) = 2, }, }, 09: table_type (i32) = 0, 10: hdfs_table (struct) = THdfsTable { 01: hdfsBaseDir (string) = "hdfs://ph-hdp-prd-nn01:8020/user/hive/warehouse/history_staging.db/esxi_crashreport", 02: colNames (list) = list[18] { [0] = "pa__arrival_day", [1] = "pa__collector_id", [2] = "pa__schema_version", [3] = "id", [4] = "cause", [5] = "count", [6] = "timestamp", [7] = "stack", [8] = "pa__is_external", [9] = "pa__collector_instance_id", [10] = "state_from_ip", [11] = "country_from_ip", [12] = "received_time", [13] = "pa__bundle__fk", [14] = "pa__arrival_ts", [15] = "pa__processed_ts", [16] = "pa__download_day", [17] = "_v", }, 03: nullPartitionKeyValue (string) = "__HIVE_DEFAULT_PARTITION__", 04: partitions (map) = map[56] { -1 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[0] { }, 08: blockSize (i32) = 0, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = -1, 15: hms_parameters (map) = map[0] { }, }, 238133 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat 
(i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1481760000, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "prototyping-only.v0", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "134269f176655ff0-3883cbef58c5b7c0_2040524579_data.0.parq", 02: length (i64) = 2930, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1481891330315, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 2930, 03: replica_host_idxs (list) = list[3] { [0] = 0, [1] = 1, [2] = 2, }, 04: disk_ids (list) = list[3] { [0] = 1, [1] = 1, [2] = 1, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=1481760000/pa__collector_id=prototyping-only.v0/pa__schema_version=1", }, 11: access_level (i32) = 1, 
12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 238133, 15: hms_parameters (map) = map[6] { "COLUMN_STATS_ACCURATE" -> "false", "numFiles" -> "1", "numRows" -> "-1", "rawDataSize" -> "-1", "totalSize" -> "2930", "transient_lastDdlTime" -> "1481891330", }, }, 238134 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1481932800, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "prototyping-only.v0", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "942a8608338e9f3-91ff249b00000000_490377550_data.0.parq", 02: length (i64) = 9152, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1482175553726, 05: file_blocks (list) = 
list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 9152, 03: replica_host_idxs (list) = list[3] { [0] = 1, [1] = 3, [2] = 4, }, 04: disk_ids (list) = list[3] { [0] = 1, [1] = 2, [2] = 2, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=1481932800/pa__collector_id=prototyping-only.v0/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 238134, 15: hms_parameters (map) = map[6] { "COLUMN_STATS_ACCURATE" -> "false", "numFiles" -> "1", "numRows" -> "-1", "rawDataSize" -> "-1", "totalSize" -> "9152", "transient_lastDdlTime" -> "1482175553", }, }, 238135 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1482019200, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "prototyping-only.v0", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types 
(list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "942a8608338e9f3-91ff249b00000000_1902167212_data.0.parq", 02: length (i64) = 3395, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1482175553693, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 3395, 03: replica_host_idxs (list) = list[3] { [0] = 1, [1] = 5, [2] = 6, }, 04: disk_ids (list) = list[3] { [0] = 1, [1] = 2, [2] = 2, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation {
I0123 01:10:22.289374  4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_40
I0123 01:10:22.289818 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_42
I0123 01:10:22.289929  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_26
I0123 01:10:22.290156  4102 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ResetMetadata from 10.153.201.21:49860 took 430.000ms
I0123 01:10:22.290932  4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_41
I0123 01:10:22.291389 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_43
I0123 01:10:22.291556  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_27
I0123 01:10:22.292071  4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_42
I0123 01:10:22.292765 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_44
I0123 01:10:22.292976  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_28
I0123 01:10:22.293392  4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_43
I0123 01:10:22.293807 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_45
I0123 01:10:22.294239  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_29
I0123 01:10:22.294775  4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_44
I0123 01:10:22.295106 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_46
I0123 01:10:22.295349  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_3
I0123 01:10:22.295938  4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_45
I0123 01:10:22.296548  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_30
I0123 01:10:22.296936 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_47
I0123 01:10:22.297297  4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_46
I0123 01:10:22.297749  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_31
I0123 01:10:22.298053 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_48
I0123 01:10:22.298596  4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_5
I0123 01:10:22.299104  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_32
I0123 01:10:22.299443 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_5
I0123 01:10:22.299664  4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_6
I0123 01:10:22.300160  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_33
I0123 01:10:22.300777  4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_7
I0123 01:10:22.301306 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_6
I0123 01:10:22.301795  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_34
I0123 01:10:22.302196  4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_8
I0123 01:10:22.302438 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_7
I0123 01:10:22.303259  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_35
I0123 01:10:22.303943  4162 HdfsTable.java:348] load block md for bundle file part-00000_copy_9
I0123 01:10:22.304138 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_8
I0123 01:10:22.304790  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_36
I0123 01:10:22.305503 20676 HdfsTable.java:348] load block md for esxi_crashreport file part-00001_copy_9
I0123 01:10:22.305995  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_37
I0123 01:10:22.307091  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_38
I0123 01:10:22.308389  4162 FsPermissionChecker.java:290] No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs) Java exception follows:
org.apache.hadoop.hdfs.protocol.AclException: The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false.
    at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
    at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3344)
    at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1983)
    at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1980)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getAclStatus(DistributedFileSystem.java:1980)
    at com.cloudera.impala.util.FsPermissionChecker.getPermissions(FsPermissionChecker.java:288)
    at com.cloudera.impala.catalog.HdfsTable.getAvailableAccessLevel(HdfsTable.java:760)
    at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:844)
    at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:792)
    at com.cloudera.impala.catalog.HdfsTable.reloadPartition(HdfsTable.java:1951)
    at com.cloudera.impala.catalog.CatalogServiceCatalog.reloadPartition(CatalogServiceCatalog.java:1210)
    at com.cloudera.impala.service.CatalogOpExecutor.execResetMetadata(CatalogOpExecutor.java:2707)
    at com.cloudera.impala.service.JniCatalog.resetMetadata(JniCatalog.java:151)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AclException): The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false.
    at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
    at org.apache.hadoop.ipc.Client.call(Client.java:1471)
    at org.apache.hadoop.ipc.Client.call(Client.java:1408)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
    at com.sun.proxy.$Proxy10.getAclStatus(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAclStatus(ClientNamenodeProtocolTranslatorPB.java:1327)
    at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy11.getAclStatus(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3342)
    ... 12 more
I0123 01:10:22.308609  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_39
I0123 01:10:22.309372 20676 FsPermissionChecker.java:290] No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs) Java exception follows:
org.apache.hadoop.hdfs.protocol.AclException: The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false.
    at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
    at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3344)
    at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1983)
    at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1980)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getAclStatus(DistributedFileSystem.java:1980)
    at com.cloudera.impala.util.FsPermissionChecker.getPermissions(FsPermissionChecker.java:288)
    at com.cloudera.impala.catalog.HdfsTable.getAvailableAccessLevel(HdfsTable.java:760)
    at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:844)
    at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:792)
    at com.cloudera.impala.catalog.HdfsTable.reloadPartition(HdfsTable.java:1951)
    at com.cloudera.impala.catalog.CatalogServiceCatalog.reloadPartition(CatalogServiceCatalog.java:1210)
    at com.cloudera.impala.service.CatalogOpExecutor.execResetMetadata(CatalogOpExecutor.java:2707)
    at com.cloudera.impala.service.JniCatalog.resetMetadata(JniCatalog.java:151)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AclException): The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false.
    at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
    at org.apache.hadoop.ipc.Client.call(Client.java:1471)
    at org.apache.hadoop.ipc.Client.call(Client.java:1408)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
    at com.sun.proxy.$Proxy10.getAclStatus(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAclStatus(ClientNamenodeProtocolTranslatorPB.java:1327)
    at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy11.getAclStatus(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3342)
    ... 12 more
I0123 01:10:22.310142  4162 HdfsTable.java:441] Loading disk ids for: history_staging.bundle. nodes: 14. filesystem: hdfs://ph-hdp-prd-nn01:8020
I0123 01:10:22.310760 20676 HdfsTable.java:441] Loading disk ids for: history.esxi_crashreport. nodes: 14. filesystem: hdfs://ph-hdp-prd-nn01:8020
I0123 01:10:22.311133  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_4
I0123 01:10:22.312141  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_40
I0123 01:10:22.313606  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_41
I0123 01:10:22.314923  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_42
I0123 01:10:22.317243  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_43
I0123 01:10:22.318470  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_44
I0123 01:10:22.319520  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_45
I0123 01:10:22.320608  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_46
I0123 01:10:22.322065  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_47
I0123 01:10:22.323633  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_48
I0123 01:10:22.324610  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_49
I0123 01:10:22.325541  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_5
I0123 01:10:22.326614  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_50
I0123 01:10:22.327531  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_51
I0123 01:10:22.328536  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_52
I0123 01:10:22.329464  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_53
I0123 01:10:22.333598  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_54
I0123 01:10:22.335731  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_6
I0123 01:10:22.337561  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_7
I0123 01:10:22.339447  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_8
I0123 01:10:22.342231  1914 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_9
I0123 01:10:22.343430 12621 catalog-server.cc:83] ResetMetadata(): response=TResetMetadataResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 43411, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 04: updated_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 43411, 05: table (struct) = TTable { 01: db_name (string) = "history", 02: tbl_name (string) = "itfm_ui", 04: id (i32) = 4149, 05: access_level (i32) = 1, 06: columns (list) = list[39] { [0] = TColumn { 01: columnName (string) = "id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 36, 02: max_size (i64) = 36, 03: num_distinct_values (i64) = 227104, 04: num_nulls (i64) = -1, }, 05: position (i32) = 3, }, [1] = TColumn { 01: columnName (string) = "_refts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 2.9923999309539795, 02: max_size (i64) = 10, 03: num_distinct_values (i64) =
7769, 04: num_nulls (i64) = -1, }, 05: position (i32) = 4, }, [2] = TColumn { 01: columnName (string) = "action_name", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 32.992713928222656, 02: max_size (i64) = 72, 03: num_distinct_values (i64) = 63, 04: num_nulls (i64) = -1, }, 05: position (i32) = 5, }, [3] = TColumn { 01: columnName (string) = "java", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 6, }, [4] = TColumn { 01: columnName (string) = "dir", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 7, }, [5] = TColumn { 01: columnName (string) = "fla", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 8, }, [6] = TColumn { 01: columnName (string) = "pdf", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = 
TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 9, }, [7] = TColumn { 01: columnName (string) = "urlref", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 39.253799438476562, 02: max_size (i64) = 5143, 03: num_distinct_values (i64) = 1402, 04: num_nulls (i64) = -1, }, 05: position (i32) = 10, }, [8] = TColumn { 01: columnName (string) = "_viewts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 10, 02: max_size (i64) = 10, 03: num_distinct_values (i64) = 31034, 04: num_nulls (i64) = -1, }, 05: position (i32) = 11, }, [9] = TColumn { 01: columnName (string) = "r", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 6, 02: max_size (i64) = 6, 03: num_distinct_values (i64) = 216459, 04: num_nulls (i64) = -1, }, 05: position (i32) = 12, }, [10] = TColumn { 01: columnName (string) = "realp", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 13, }, [11] = TColumn { 01: columnName (string) = "s", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type 
(i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1.8332446813583374, 02: max_size (i64) = 2, 03: num_distinct_values (i64) = 58, 04: num_nulls (i64) = -1, }, 05: position (i32) = 14, }, [12] = TColumn { 01: columnName (string) = "qt", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 15, }, [13] = TColumn { 01: columnName (string) = "m", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1.8328707218170166, 02: max_size (i64) = 2, 03: num_distinct_values (i64) = 58, 04: num_nulls (i64) = -1, }, 05: position (i32) = 16, }, [14] = TColumn { 01: columnName (string) = "cookie", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 17, }, [15] = TColumn { 01: columnName (string) = "_idn", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 18, }, [16] = TColumn { 01: columnName 
(string) = "res", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 10.51788330078125, 02: max_size (i64) = 37, 03: num_distinct_values (i64) = 992, 04: num_nulls (i64) = -1, }, 05: position (i32) = 19, }, [17] = TColumn { 01: columnName (string) = "_idvc", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1.2742840051651001, 02: max_size (i64) = 3, 03: num_distinct_values (i64) = 150, 04: num_nulls (i64) = -1, }, 05: position (i32) = 20, }, [18] = TColumn { 01: columnName (string) = "gt_ms", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1.8606743812561035, 02: max_size (i64) = 6, 03: num_distinct_values (i64) = 2263, 04: num_nulls (i64) = -1, }, 05: position (i32) = 21, }, [19] = TColumn { 01: columnName (string) = "_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = 9294, 04: num_nulls (i64) = -1, }, 05: position (i32) = 22, }, [20] = TColumn { 01: columnName (string) = "gears", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: 
max_size (i64) = 1, 03: num_distinct_values (i64) = 1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 23, }, [21] = TColumn { 01: columnName (string) = "url", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 50.097366333007812, 02: max_size (i64) = 5143, 03: num_distinct_values (i64) = 1175, 04: num_nulls (i64) = -1, }, 05: position (i32) = 24, }, [22] = TColumn { 01: columnName (string) = "_idts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 10, 02: max_size (i64) = 10, 03: num_distinct_values (i64) = 9518, 04: num_nulls (i64) = -1, }, 05: position (i32) = 25, }, [23] = TColumn { 01: columnName (string) = "h", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1.8520394563674927, 02: max_size (i64) = 2, 03: num_distinct_values (i64) = 24, 04: num_nulls (i64) = -1, }, 05: position (i32) = 26, }, [24] = TColumn { 01: columnName (string) = "ag", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 27, }, [25] = TColumn { 01: columnName (string) = "wma", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = 
TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 28, }, [26] = TColumn { 01: columnName (string) = "pa__is_external", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 29, }, [27] = TColumn { 01: columnName (string) = "pa__collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 37.609352111816406, 02: max_size (i64) = 40, 03: num_distinct_values (i64) = 1805, 04: num_nulls (i64) = -1, }, 05: position (i32) = 30, }, [28] = TColumn { 01: columnName (string) = "ua_browser", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 5.930443286895752, 02: max_size (i64) = 14, 03: num_distinct_values (i64) = 12, 04: num_nulls (i64) = -1, }, 05: position (i32) = 31, }, [29] = TColumn { 01: columnName (string) = "pa__bundle__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 37.235195159912109, 02: max_size (i64) = 39, 03: num_distinct_values (i64) = 240271, 04: num_nulls (i64) = -1, }, 05: position (i32) = 32, }, 
[30] = TColumn { 01: columnName (string) = "pa__arrival_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = 233770, 04: num_nulls (i64) = -1, }, 05: position (i32) = 33, }, [31] = TColumn { 01: columnName (string) = "pa__processed_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = 232171, 04: num_nulls (i64) = -1, }, 05: position (i32) = 34, }, [32] = TColumn { 01: columnName (string) = "ua_os", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8.61578369140625, 02: max_size (i64) = 19, 03: num_distinct_values (i64) = 12, 04: num_nulls (i64) = -1, }, 05: position (i32) = 35, }, [33] = TColumn { 01: columnName (string) = "ua_os_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 3.4931962490081787, 02: max_size (i64) = 7, 03: num_distinct_values (i64) = 32, 04: num_nulls (i64) = -1, }, 05: position (i32) = 36, }, [34] = TColumn { 01: columnName (string) = "ua_browser_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: 
col_stats (struct) = TColumnStats { 01: avg_size (double) = 9.1478557586669922, 02: max_size (i64) = 13, 03: num_distinct_values (i64) = 132, 04: num_nulls (i64) = -1, }, 05: position (i32) = 37, }, [35] = TColumn { 01: columnName (string) = "received_time", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 13, 02: max_size (i64) = 13, 03: num_distinct_values (i64) = 49036, 04: num_nulls (i64) = -1, }, 05: position (i32) = 38, }, [36] = TColumn { 01: columnName (string) = "uid", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "user ID of the logged in user (hashed)", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 15.979450225830078, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = 634, 04: num_nulls (i64) = -1, }, 05: position (i32) = 39, }, [37] = TColumn { 01: columnName (string) = "state_from_ip", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "State from which the user is using the UI (derived)", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 0.46828320622444153, 02: max_size (i64) = 2, 03: num_distinct_values (i64) = 38, 04: num_nulls (i64) = -1, }, 05: position (i32) = 40, }, [38] = TColumn { 01: columnName (string) = "country_from_ip", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Country from which the user is using the UI (derived)", 04: col_stats (struct) = 
TColumnStats { 01: avg_size (double) = 1.5372523069381714, 02: max_size (i64) = 2, 03: num_distinct_values (i64) = 60, 04: num_nulls (i64) = -1, }, 05: position (i32) = 41, }, }, 07: clustering_columns (list) = list[3] { [0] = TColumn { 01: columnName (string) = "pa__arrival_day", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 0, }, [1] = TColumn { 01: columnName (string) = "pa__collector_id", 02: columnType (struc I0123 01:10:22.349876 1914 FsPermissionChecker.java:290] No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs) Java exception follows: org.apache.hadoop.hdfs.protocol.AclException: The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false. 
	at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617)
	at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
	at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
	at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3344)
	at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1983)
	at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1980)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getAclStatus(DistributedFileSystem.java:1980)
	at com.cloudera.impala.util.FsPermissionChecker.getPermissions(FsPermissionChecker.java:288)
	at com.cloudera.impala.catalog.HdfsTable.getAvailableAccessLevel(HdfsTable.java:760)
	at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:844)
	at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:792)
	at com.cloudera.impala.catalog.HdfsTable.reloadPartition(HdfsTable.java:1951)
	at com.cloudera.impala.catalog.CatalogServiceCatalog.reloadPartition(CatalogServiceCatalog.java:1210)
	at com.cloudera.impala.service.CatalogOpExecutor.execResetMetadata(CatalogOpExecutor.java:2707)
	at com.cloudera.impala.service.JniCatalog.resetMetadata(JniCatalog.java:151)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AclException): The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false.
	at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617)
	at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
	at org.apache.hadoop.ipc.Client.call(Client.java:1471)
	at org.apache.hadoop.ipc.Client.call(Client.java:1408)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
	at com.sun.proxy.$Proxy10.getAclStatus(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAclStatus(ClientNamenodeProtocolTranslatorPB.java:1327)
	at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
	at com.sun.proxy.$Proxy11.getAclStatus(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3342)
	... 12 more
I0123 01:10:22.350879 1914 HdfsTable.java:441] Loading disk ids for: history.h5_ui. nodes: 14. filesystem: hdfs://ph-hdp-prd-nn01:8020
I0123 01:10:22.382128 12621 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ResetMetadata from 10.153.201.22:38759 took 1s145ms
I0123 01:10:22.440480 11522 catalog-server.cc:83] ResetMetadata(): response=TResetMetadataResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 43410, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 04: updated_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 43410, 05: table (struct) = TTable { 01: db_name (string) = "history", 02: tbl_name (string) = "h5_ui_errors", 04: id (i32) = 4139, 05: access_level (i32) = 1, 06: columns (list) = list[20] { [0] = TColumn { 01: columnName (string) = "id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 36, 02: max_size (i64) = 36, 03: num_distinct_values (i64) = 1203160, 04: num_nulls (i64) = -1, }, 05: position (i32) = 3, }, [1] = TColumn { 01: columnName (string) = "user", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 0.47408992052078247, 02: max_size (i64) = 68, 03: num_distinct_values (i64) = 530,
04: num_nulls (i64) = -1, }, 05: position (i32) = 4, }, [2] = TColumn { 01: columnName (string) = "message", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 449.67361450195312, 02: max_size (i64) = 1043850, 03: num_distinct_values (i64) = 34690, 04: num_nulls (i64) = -1, }, 05: position (i32) = 5, }, [3] = TColumn { 01: columnName (string) = "state_from_ip", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 2, 02: max_size (i64) = 2, 03: num_distinct_values (i64) = 31, 04: num_nulls (i64) = -1, }, 05: position (i32) = 6, }, [4] = TColumn { 01: columnName (string) = "version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 11.391100883483887, 02: max_size (i64) = 111, 03: num_distinct_values (i64) = 991, 04: num_nulls (i64) = -1, }, 05: position (i32) = 7, }, [5] = TColumn { 01: columnName (string) = "client_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 4.7701358795166016, 02: max_size (i64) = 6, 03: num_distinct_values (i64) = 1371, 04: num_nulls (i64) = -1, }, 05: position (i32) = 8, }, [6] = TColumn { 01: columnName (string) = "url", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: 
type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 133.04139709472656, 02: max_size (i64) = 738, 03: num_distinct_values (i64) = 64362, 04: num_nulls (i64) = -1, }, 05: position (i32) = 9, }, [7] = TColumn { 01: columnName (string) = "received_time", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 13, 02: max_size (i64) = 13, 03: num_distinct_values (i64) = 2865, 04: num_nulls (i64) = -1, }, 05: position (i32) = 10, }, [8] = TColumn { 01: columnName (string) = "stack_trace", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 5949.24267578125, 02: max_size (i64) = 154967, 03: num_distinct_values (i64) = 25787, 04: num_nulls (i64) = -1, }, 05: position (i32) = 11, }, [9] = TColumn { 01: columnName (string) = "pa__is_external", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 12, }, [10] = TColumn { 01: columnName (string) = "creation_date", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 13.024072647094727, 02: max_size (i64) = 62, 03: num_distinct_values (i64) = 1050405, 04: num_nulls (i64) = -1, }, 05: position (i32) = 13, }, [11] = TColumn { 01: 
columnName (string) = "pa__collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 24.760128021240234, 02: max_size (i64) = 40, 03: num_distinct_values (i64) = 13159, 04: num_nulls (i64) = -1, }, 05: position (i32) = 14, }, [12] = TColumn { 01: columnName (string) = "product", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16.999979019165039, 02: max_size (i64) = 17, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 15, }, [13] = TColumn { 01: columnName (string) = "country_from_ip", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 2, 02: max_size (i64) = 2, 03: num_distinct_values (i64) = 26, 04: num_nulls (i64) = -1, }, 05: position (i32) = 16, }, [14] = TColumn { 01: columnName (string) = "rating", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 0, 04: num_nulls (i64) = -1, }, 05: position (i32) = 17, }, [15] = TColumn { 01: columnName (string) = "ua", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size 
(double) = 104.68424987792969, 02: max_size (i64) = 274, 03: num_distinct_values (i64) = 4757, 04: num_nulls (i64) = -1, }, 05: position (i32) = 18, }, [16] = TColumn { 01: columnName (string) = "pa__bundle__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 36.251823425292969, 02: max_size (i64) = 41, 03: num_distinct_values (i64) = 1252171, 04: num_nulls (i64) = -1, }, 05: position (i32) = 19, }, [17] = TColumn { 01: columnName (string) = "pa__arrival_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = 903572, 04: num_nulls (i64) = -1, }, 05: position (i32) = 20, }, [18] = TColumn { 01: columnName (string) = "pa__processed_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = 1177016, 04: num_nulls (i64) = -1, }, 05: position (i32) = 21, }, [19] = TColumn { 01: columnName (string) = "_v", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 3, 02: max_size (i64) = 3, 03: num_distinct_values (i64) = 1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 22, }, }, 07: clustering_columns (list) = list[3] { [0] = TColumn { 01: columnName (string) = "pa__arrival_day", 02: 
columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 0, }, [1] = TColumn { 01: columnName (string) = "pa__collector_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 1, }, [2] = TColumn { 01: columnName (string) = "pa__schema_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 2, }, }, 08: table_stats (struct) = TTableStats { 01: num_rows (i64) = 1262115, }, 09: table_type (i32) = 0, 10: hdfs_table (struct) = THdfsTable { 01: hdfsBaseDir (string) = "hdfs://ph-hdp-prd-nn01:8020/user/hive/warehouse/history.db/h5_ui_errors", 02: colNames (list) = list[23] { [0] = "pa__arrival_day", [1] = "pa__collector_id", [2] = "pa__schema_version", [3] = "id", [4] = "user", [5] = "message", [6] = "state_from_ip", [7] = "version", [8] = "client_id", [9] = "url", [10] = "received_time", [11] = "stack_trace", [12] = "pa__is_external", [13] = "creation_date", [14] = "pa__collector_instance_id", [15] = "product", [16] = "country_from_ip", [17] = "rating", [18] = "ua", [19] = "pa__bundle__fk", [20] = "pa__arrival_ts", [21] = "pa__processed_ts", [22] = "_v", }, 03: 
nullPartitionKeyValue (string) = "__HIVE_DEFAULT_PARTITION__", 04: partitions (map) = map[421] { -1 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[0] { }, 08: blockSize (i32) = 0, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = -1, 15: hms_parameters (map) = map[0] { }, }, 12833 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1457308800, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "vsphere_h5.1_0", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = 
THdfsFileDesc { 01: file_name (string) = "4d4949b717c92da7-832297456f4a319b_1207033728_data.0.parq", 02: length (i64) = 4653, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1461921384582, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 4653, 03: replica_host_idxs (list) = list[3] { [0] = 0, [1] = 1, [2] = 2, }, 04: disk_ids (list) = list[3] { [0] = 0, [1] = 1, [2] = 1, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=1457308800/pa__collector_id=vsphere_h5.1_0/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = 11, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 12833, 15: hms_parameters (map) = map[8] { "COLUMN_STATS_ACCURATE" -> "true", "impala_intermediate_stats_chunk0" -> "HBYWABsUjA1zdGF0[...](1396)", "impala_intermediate_stats_num_chunks" -> "1", "numFiles" -> "1", "numRows" -> "11", "rawDataSize" -> "-1", "totalSize" -> "4653", "transient_lastDdlTime" -> "1461921385", }, }, 12835 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1457395200, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: 
scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "vsphere_h5.1_0", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "4d4949b717c92da7-832297456f4a3197_116194188_data.0.parq", 02: length (i64) = 63729, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1461921384579, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 63729, 03: replica_host_idxs (list) = list[3] { [0] = 3, [1] = 4, [2] = 0, }, 04: disk_ids (list) = list[3] { [0] = 1, [1] = 0, [2] = 1, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=1457395200/pa__collector_id=vsphere_h5.1_0/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = 420, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 12835, 15: hms_parameters (map) = map[10] { "COLUMN_STATS_ACCURATE" -> "true", "impala_intermediate_stats_chunk0" -> "HBbIBgAbFIwNc3Rh[...](4000)", "impala_intermediate_stats_chunk1" -> "AAMAAQAAAQACAAAC[...](4000)", "impala_intermediate_stats_chunk2" -> "PxbIBgACX3YYCP8A[...](460)", "impala_intermediate_stats_num_chunks" -> "3", "numFiles" -> "1", "numRows" -> "420", "rawDataSi I0123 01:10:22.461436 20676 catalog-server.cc:83] ResetMetadata(): response=TResetMetadataResponse { 01: result (struct) = 
TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 43414, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 04: updated_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 43414, 05: table (struct) = TTable { 01: db_name (string) = "history", 02: tbl_name (string) = "esxi_crashreport", 04: id (i32) = 4121, 05: access_level (i32) = 1, 06: columns (list) = list[14] { [0] = TColumn { 01: columnName (string) = "stack", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 606.526123046875, 02: max_size (i64) = 6276, 03: num_distinct_values (i64) = 12874, 04: num_nulls (i64) = -1, }, 05: position (i32) = 3, }, [1] = TColumn { 01: columnName (string) = "id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 36, 02: max_size (i64) = 36, 03: num_distinct_values (i64) = 91869, 04: num_nulls (i64) = -1, }, 05: position (i32) = 4, }, [2] = TColumn { 01: columnName (string) = "timestamp", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 28.641742706298828, 02: max_size (i64) = 32, 03: num_distinct_values (i64) = 77625, 04: num_nulls (i64) = -1, }, 05: position (i32) = 5, }, [3] = TColumn { 01: columnName (string) = "count", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] 
= TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1.0094617605209351, 02: max_size (i64) = 4, 03: num_distinct_values (i64) = 105, 04: num_nulls (i64) = -1, }, 05: position (i32) = 6, }, [4] = TColumn { 01: columnName (string) = "cause", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 54.00946044921875, 02: max_size (i64) = 2174, 03: num_distinct_values (i64) = 1750, 04: num_nulls (i64) = -1, }, 05: position (i32) = 7, }, [5] = TColumn { 01: columnName (string) = "pa__is_external", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 8, }, [6] = TColumn { 01: columnName (string) = "pa__collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 11.062822341918945, 02: max_size (i64) = 36, 03: num_distinct_values (i64) = 12718, 04: num_nulls (i64) = -1, }, 05: position (i32) = 9, }, [7] = TColumn { 01: columnName (string) = "pa__bundle__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 38.995559692382812, 02: max_size (i64) = 39, 03: num_distinct_values 
(i64) = 82764, 04: num_nulls (i64) = -1, }, 05: position (i32) = 10, }, [8] = TColumn { 01: columnName (string) = "pa__arrival_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = 87248, 04: num_nulls (i64) = -1, }, 05: position (i32) = 11, }, [9] = TColumn { 01: columnName (string) = "pa__processed_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = 80395, 04: num_nulls (i64) = -1, }, 05: position (i32) = 12, }, [10] = TColumn { 01: columnName (string) = "received_time", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Time recorded when the request is received by the web server", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 0, 02: max_size (i64) = 0, 03: num_distinct_values (i64) = 0, 04: num_nulls (i64) = -1, }, 05: position (i32) = 13, }, [11] = TColumn { 01: columnName (string) = "_v", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Client version to be used for continual deployments like Flings or SaaS applications. 
", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 3, 02: max_size (i64) = 3, 03: num_distinct_values (i64) = 1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 14, }, [12] = TColumn { 01: columnName (string) = "country_from_ip", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "Country from which the user is using the UI (derived)", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 0, 02: max_size (i64) = 0, 03: num_distinct_values (i64) = 0, 04: num_nulls (i64) = -1, }, 05: position (i32) = 15, }, [13] = TColumn { 01: columnName (string) = "state_from_ip", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: comment (string) = "State from which the user is using the UI (derived)", 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 0, 02: max_size (i64) = 0, 03: num_distinct_values (i64) = 0, 04: num_nulls (i64) = -1, }, 05: position (i32) = 16, }, }, 07: clustering_columns (list) = list[3] { [0] = TColumn { 01: columnName (string) = "pa__arrival_day", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 0, }, [1] = TColumn { 01: columnName (string) = "pa__collector_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 
03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 1, }, [2] = TColumn { 01: columnName (string) = "pa__schema_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 2, }, }, 08: table_stats (struct) = TTableStats { 01: num_rows (i64) = 84445, }, 09: table_type (i32) = 0, 10: hdfs_table (struct) = THdfsTable { 01: hdfsBaseDir (string) = "hdfs://ph-hdp-prd-nn01:8020/user/hive/warehouse/history.db/esxi_crashreport", 02: colNames (list) = list[17] { [0] = "pa__arrival_day", [1] = "pa__collector_id", [2] = "pa__schema_version", [3] = "stack", [4] = "id", [5] = "timestamp", [6] = "count", [7] = "cause", [8] = "pa__is_external", [9] = "pa__collector_instance_id", [10] = "pa__bundle__fk", [11] = "pa__arrival_ts", [12] = "pa__processed_ts", [13] = "received_time", [14] = "_v", [15] = "country_from_ip", [16] = "state_from_ip", }, 03: nullPartitionKeyValue (string) = "__HIVE_DEFAULT_PARTITION__", 04: partitions (map) = map[405] { -1 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[0] { }, 08: blockSize (i32) = 0, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = -1, 15: hms_parameters (map) = map[0] { }, }, 1459 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = 
TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1449446400, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "hostclient.1_0", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "80479185a9f282b8-8be1c5cb918d33af_1593914770_data.0.parq", 02: length (i64) = 2195, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1461861730101, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 2195, 03: replica_host_idxs (list) = list[3] { [0] = 0, [1] = 1, [2] = 2, }, 04: disk_ids (list) = list[3] { [0] = 0, [1] = 0, [2] = 1, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=1449446400/pa__collector_id=hostclient.1_0/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = 1, }, 13: 
is_marked_cached (bool) = false, 14: id (i64) = 1459, 15: hms_parameters (map) = map[8] { "COLUMN_STATS_ACCURATE" -> "true", "impala_intermediate_stats_chunk0" -> "HBYCABsOjAVzdGFj[...](796)", "impala_intermediate_stats_num_chunks" -> "1", "numFiles" -> "1", "numRows" -> "1", "rawDataSize" -> "-1", "totalSize" -> "2195", "transient_lastDdlTime" -> "1461861732", }, }, 1460 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1449532800, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "hostclient.1_0", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "80479185a9f282b8-8be1c5cb918d33ae_537400940_data.0.parq", 02: length (i64) = 2482, 03: compression (i32) = 0, 04: last_modification_time (i64) = 
1461861730057, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 2482, 03: replica_host_idxs (list) = list[3] { [0] = 3, [1] = 4, [2] = 5, }, 04: disk_ids (list) = list[3] { [0] = 0, [1] = 0, [2] = 1, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=1449532800/pa__collector_id=hostclient.1_0/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = 3, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 1460, 15: hms_parameters (map) = map[8] { "COLUMN_STATS_ACCURATE" -> "true", "impala_intermediate_stats_chunk0" -> "HBYGABsOjAVzdGFj[...](844)", "impala_intermediate_stats_num_chunks" -> "1", "numFiles" -> "1", "numRows" -> "3", "rawDataSize" -> "-1", "totalSize" -> "2482", "transient_lastDdlTime" -> "1461861731", }, }, 1461 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1450310400, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "hostclient.1_0", }, }, }, 
}, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "80479185a9f282b8-8be1c5cb918d33b3_299503329_data.0.parq", 02: length (i64) = 2161, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1461861729759, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 2161, 03: replica_host_idxs (list) = list[3] { [0] = 0, [1] = 3, [2] = 4, }, 04: disk_ids (list) = list[3] { [0] = 1, [1] = 1, [2] = 0, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, I0123 01:10:22.471185 11522 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ResetMetadata from 10.153.201.23:33385 took 1s249ms I0123 01:10:22.479312 20676 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ResetMetadata from 10.153.201.25:58551 took 292.000ms I0123 01:10:22.714704 1914 catalog-server.cc:83] ResetMetadata(): response=TResetMetadataResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 43415, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 04: updated_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 43415, 05: table (struct) = TTable { 01: db_name (string) = "history", 02: tbl_name (string) = "h5_ui", 04: id (i32) = 4261, 05: access_level (i32) = 1, 06: columns (list) = list[44] { [0] = TColumn { 01: columnName (string) = "id", 02: columnType 
(struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 36, 02: max_size (i64) = 36, 03: num_distinct_values (i64) = 8136746, 04: num_nulls (i64) = -1, }, 05: position (i32) = 3, }, [1] = TColumn { 01: columnName (string) = "r", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 6, 02: max_size (i64) = 6, 03: num_distinct_values (i64) = 1013025, 04: num_nulls (i64) = -1, }, 05: position (i32) = 4, }, [2] = TColumn { 01: columnName (string) = "ua_browser_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 5.4123353958129883, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = 695, 04: num_nulls (i64) = -1, }, 05: position (i32) = 5, }, [3] = TColumn { 01: columnName (string) = "action_name", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 20.313512802124023, 02: max_size (i64) = 236, 03: num_distinct_values (i64) = 256499, 04: num_nulls (i64) = -1, }, 05: position (i32) = 6, }, [4] = TColumn { 01: columnName (string) = "pdf", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: 
num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 7, }, [5] = TColumn { 01: columnName (string) = "_viewts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 9.9999046325683594, 02: max_size (i64) = 10, 03: num_distinct_values (i64) = 658692, 04: num_nulls (i64) = -1, }, 05: position (i32) = 8, }, [6] = TColumn { 01: columnName (string) = "_refts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1.0937662124633789, 02: max_size (i64) = 10, 03: num_distinct_values (i64) = 4116, 04: num_nulls (i64) = -1, }, 05: position (i32) = 9, }, [7] = TColumn { 01: columnName (string) = "qt", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 10, }, [8] = TColumn { 01: columnName (string) = "fla", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 11, }, [9] = TColumn { 01: columnName (string) = "gears", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type 
(i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 12, }, [10] = TColumn { 01: columnName (string) = "pa__collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 32.891670227050781, 02: max_size (i64) = 40, 03: num_distinct_values (i64) = 28731, 04: num_nulls (i64) = -1, }, 05: position (i32) = 13, }, [11] = TColumn { 01: columnName (string) = "e_a", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 27.433856964111328, 02: max_size (i64) = 72, 03: num_distinct_values (i64) = 316, 04: num_nulls (i64) = -1, }, 05: position (i32) = 14, }, [12] = TColumn { 01: columnName (string) = "ua_os", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 5.7655916213989258, 02: max_size (i64) = 32, 03: num_distinct_values (i64) = 28, 04: num_nulls (i64) = -1, }, 05: position (i32) = 15, }, [13] = TColumn { 01: columnName (string) = "state_from_ip", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 0.28558772802352905, 02: max_size (i64) = 2, 03: num_distinct_values (i64) = 51, 04: num_nulls (i64) = -1, }, 05: position (i32) = 16, }, [14] = TColumn { 01: 
columnName (string) = "ua_browser", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 6.5882635116577148, 02: max_size (i64) = 25, 03: num_distinct_values (i64) = 32, 04: num_nulls (i64) = -1, }, 05: position (i32) = 17, }, [15] = TColumn { 01: columnName (string) = "e_n", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 65.700752258300781, 02: max_size (i64) = 5891, 03: num_distinct_values (i64) = 467670, 04: num_nulls (i64) = -1, }, 05: position (i32) = 18, }, [16] = TColumn { 01: columnName (string) = "url", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 108.94293975830078, 02: max_size (i64) = 727, 03: num_distinct_values (i64) = 1531624, 04: num_nulls (i64) = -1, }, 05: position (i32) = 19, }, [17] = TColumn { 01: columnName (string) = "java", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 20, }, [18] = TColumn { 01: columnName (string) = "wma", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size 
(double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 21, }, [19] = TColumn { 01: columnName (string) = "ua_os_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 2.0322856903076172, 02: max_size (i64) = 10, 03: num_distinct_values (i64) = 89, 04: num_nulls (i64) = -1, }, 05: position (i32) = 22, }, [20] = TColumn { 01: columnName (string) = "country_from_ip", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 0.76013314723968506, 02: max_size (i64) = 2, 03: num_distinct_values (i64) = 132, 04: num_nulls (i64) = -1, }, 05: position (i32) = 23, }, [21] = TColumn { 01: columnName (string) = "h", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1.6589550971984863, 02: max_size (i64) = 2, 03: num_distinct_values (i64) = 24, 04: num_nulls (i64) = -1, }, 05: position (i32) = 24, }, [22] = TColumn { 01: columnName (string) = "cookie", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 25, }, [23] = TColumn { 01: columnName (string) = "received_time", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] 
= TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 13, 02: max_size (i64) = 13, 03: num_distinct_values (i64) = 1573824, 04: num_nulls (i64) = -1, }, 05: position (i32) = 26, }, [24] = TColumn { 01: columnName (string) = "_idn", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 27, }, [25] = TColumn { 01: columnName (string) = "dir", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 28, }, [26] = TColumn { 01: columnName (string) = "realp", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 29, }, [27] = TColumn { 01: columnName (string) = "uid", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 55.382312774658203, 02: max_size (i64) = 64, 03: num_distinct_values (i64) = 398706, 04: num_nulls (i64) = -1, }, 05: position (i32) = 30, }, [28] = TColumn { 
01: columnName (string) = "pa__is_external", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 31, }, [29] = TColumn { 01: columnName (string) = "ag", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 32, }, [30] = TColumn { 01: columnName (string) = "m", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1.836124062538147, 02: max_size (i64) = 2, 03: num_distinct_values (i64) = 58, 04: num_nulls (i64) = -1, }, 05: position (i32) = 33, }, [31] = TColumn { 01: columnName (string) = "gt_ms", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 3.2257623672485352, 02: max_size (i64) = 105, 03: num_distinct_values (i64) = 13024, 04: num_nulls (i64) = -1, }, 05: position (i32) = 34, }, [32] = TColumn { 01: columnName (string) = "urlref", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 
93.133033752441406, 02: max_size (i64) = 731, 03: num_distinct_values (i64) = 1229731, 04: num_nulls (i64) = -1, }, 05: position (i32) = 35, }, [33] = TColumn { 01: columnName (string) = "e_c", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16.675926208496094, 02: max_size (i64) = 18, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 36, }, [34] = TColumn { 01: columnName (string) = "_idts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 10, 02: max_size (i64) = 10, 03: num_distinct_values (i64) = 476226, 04: num_nulls (i64) = -1, }, 05: position (i32) = 37, }, [35] = TColumn { 01: columnName (string) = "res", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 9.1596364974975586, 02: max_size (i64) = 37, 03: num_distinct_values (i64) = 3451, 04: num_nulls (i64) = -1, }, 05: position (i32) = 38, }, [36] = TColumn { 01: columnName (string) = "_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 15.999181747436523, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = 560541, 04: num_nulls (i64) = -1, }, 05: position (i32) = 39, }, [37] = TColumn { 01: columnName (string) = "_idvc", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = 
TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1.2025564908981323, 02: max_size (i64) = 4, 03: num_distinct_values (i64) = 1095, 04: num_nulls (i64) = -1, }, 05: position (i32) = 40, }, [38] = TColumn { 01: columnName (string) = "s", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1.8352814912796021, 02: max_size (i64) = 2, 03: num_distinct_values (i64) = 58, 04: num_nulls (i64) = -1, }, 05: position (i32) = 41, }, [39] = TColumn { 01: columnName (string) = "pa__bundle__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 36.527866363525391, 02: max_size (i64) = 39, 03: num_distinct_values (i64) = 8432198, 04: num_nulls (i64) = -1, }, 05: position (i32) = 42, }, [40] = TColumn { 01: columnName (string) = "pa__arrival_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, I0123 01:10:22.732357 4162 catalog-server.cc:83] ResetMetadata(): response=TResetMetadataResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 43413, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 04: updated_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 43413, 05: table (struct) = TTable { 01: db_name (string) = 
"history_staging", 02: tbl_name (string) = "bundle", 04: id (i32) = 4125, 05: access_level (i32) = 1, 06: columns (list) = list[18] { [0] = TColumn { 01: columnName (string) = "id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 3, }, [1] = TColumn { 01: columnName (string) = "internal_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 4, }, [2] = TColumn { 01: columnName (string) = "size_in_bytes", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 5, }, [3] = TColumn { 01: columnName (string) = "ext", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 6, }, [4] = TColumn { 01: columnName (string) = "pa__detected_proxy_sources", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type 
(struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 7, }, [5] = TColumn { 01: columnName (string) = "pa__proxy_source", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 8, }, [6] = TColumn { 01: columnName (string) = "pa__os_language", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 9, }, [7] = TColumn { 01: columnName (string) = "collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 10, }, [8] = TColumn { 01: columnName (string) = "collection__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 11, }, [9] = TColumn { 01: columnName (string) 
= "pa__is_external", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 12, }, [10] = TColumn { 01: columnName (string) = "pa__collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 13, }, [11] = TColumn { 01: columnName (string) = "pa__bundle__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 14, }, [12] = TColumn { 01: columnName (string) = "pa__arrival_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 15, }, [13] = TColumn { 01: columnName (string) = "pa__processed_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: 
max_size (i64) = 16, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 16, }, [14] = TColumn { 01: columnName (string) = "envelope_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 17, }, [15] = TColumn { 01: columnName (string) = "pa__kafka_partition_offset", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 18, }, [16] = TColumn { 01: columnName (string) = "pa__kafka_partition", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 19, }, [17] = TColumn { 01: columnName (string) = "pa__client_ip_path", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 20, }, }, 07: clustering_columns (list) = list[3] { [0] = TColumn { 01: columnName (string) = "pa__arrival_day", 02: columnType (struct) = TColumnType { 01: types 
(list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 77, 04: num_nulls (i64) = 0, }, 05: position (i32) = 0, }, [1] = TColumn { 01: columnName (string) = "pa__collector_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = 90, 04: num_nulls (i64) = 0, }, 05: position (i32) = 1, }, [2] = TColumn { 01: columnName (string) = "pa__schema_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 1, 04: num_nulls (i64) = 0, }, 05: position (i32) = 2, }, }, 09: table_type (i32) = 0, 10: hdfs_table (struct) = THdfsTable { 01: hdfsBaseDir (string) = "hdfs://ph-hdp-prd-nn01:8020/user/hive/warehouse/history_staging.db/bundle", 02: colNames (list) = list[21] { [0] = "pa__arrival_day", [1] = "pa__collector_id", [2] = "pa__schema_version", [3] = "id", [4] = "internal_id", [5] = "size_in_bytes", [6] = "ext", [7] = "pa__detected_proxy_sources", [8] = "pa__proxy_source", [9] = "pa__os_language", [10] = "collector_instance_id", [11] = "collection__fk", [12] = "pa__is_external", [13] = "pa__collector_instance_id", [14] = "pa__bundle__fk", [15] = "pa__arrival_ts", [16] = "pa__processed_ts", [17] = "envelope_ts", [18] = "pa__kafka_partition_offset", [19] = "pa__kafka_partition", [20] = "pa__client_ip_path", }, 03: nullPartitionKeyValue (string) = "__HIVE_DEFAULT_PARTITION__", 04: 
partitions (map) = map[819] { -1 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[0] { }, 08: blockSize (i32) = 0, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = -1, 15: hms_parameters (map) = map[0] { }, }, 4461 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1462406400, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "kafka-output", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = 
"b434d4229412fbf-8d7ef47f0000000d_592464453_data.0.parq", 02: length (i64) = 3029, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1482419824254, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 3029, 03: replica_host_idxs (list) = list[3] { [0] = 0, [1] = 1, [2] = 2, }, 04: disk_ids (list) = list[3] { [0] = 2, [1] = 2, [2] = 1, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=1462406400/pa__collector_id=kafka-output/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 4461, 15: hms_parameters (map) = map[6] { "COLUMN_STATS_ACCURATE" -> "false", "numFiles" -> "1", "numRows" -> "-1", "rawDataSize" -> "-1", "totalSize" -> "3029", "transient_lastDdlTime" -> "1484725727", }, }, 4462 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1462838400, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value 
(string) = "vsm.1_0", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "b434d4229412fbf-8d7ef47f00000002_1792732270_data.0.parq", 02: length (i64) = 3227, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1482419824277, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 3227, 03: replica_host_idxs (list) = list[3] { [0] = 2, [1] = 3, [2] = 0, }, 04: disk_ids (list) = list[3] { [0] = 1, [1] = 1, [2] = 0, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=1462838400/pa__collector_id=vsm.1_0/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 4462, 15: hms_parameters (map) = map[6] { "COLUMN_STATS_ACCURATE" -> "false", "numFiles" -> "1", "numRows" -> "-1", "rawDataSize" -> "-1", "totalSize" -> "3227", "transient_lastDdlTime" -> "1484725727", }, }, 4463 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = 
TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1463875200, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { I0123 01:10:22.754794 4162 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ResetMetadata from 10.153.201.22:58692 took 875.000ms I0123 01:10:22.772971 1914 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ResetMetadata from 10.153.201.26:54901 took 553.000ms I0123 01:10:22.877358 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:10:22.877524 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:10:22.954248 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 01:10:22.954432 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns I0123 01:10:23.511273 11226 catalog-server.cc:316] Publishing update: TABLE:history.bundle@43398 I0123 01:10:23.657447 21486 rpc-trace.cc:184] RPC call: CatalogService.ResetMetadata(from 10.153.201.23:53772) I0123 01:10:23.658335 21486 catalog-server.cc:77] ResetMetadata(): request=TResetMetadataRequest { 01: protocol_version (i32) = 0, 02: is_refresh (bool) = true, 03: table_name (struct) = TTableName { 01: db_name (string) = "history_staging", 02: table_name (string) = "bundle", }, 05: partition_spec (list) = list[3] { [0] = TPartitionKeyValue { 01: name (string) = "pa__schema_version", 02: value (string) = "1", }, [1] = TPartitionKeyValue { 01: name (string) = "pa__collector_id", 02: value 
(string) = "airwatch-admin-ui.1_0", }, [2] = TPartitionKeyValue { 01: name (string) = "pa__arrival_day", 02: value (string) = "1485129600", }, }, } I0123 01:10:23.659584 21486 CatalogServiceCatalog.java:1191] Refreshing Partition metadata: history_staging.bundle pa__arrival_day=1485129600/pa__collector_id=airwatch-admin-ui.1_0/pa__schema_version=1 I0123 01:10:23.695709 21486 HdfsTable.java:348] load block md for bundle file part-00000 I0123 01:10:23.697378 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_1 I0123 01:10:23.699041 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_10 I0123 01:10:23.700423 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_11 I0123 01:10:23.701866 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_12 I0123 01:10:23.703171 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_13 I0123 01:10:23.704314 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_14 I0123 01:10:23.705646 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_15 I0123 01:10:23.706935 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_16 I0123 01:10:23.708212 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_17 I0123 01:10:23.709471 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_18 I0123 01:10:23.710630 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_19 I0123 01:10:23.711736 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_2 I0123 01:10:23.712828 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_20 I0123 01:10:23.713907 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_21 I0123 01:10:23.715687 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_22 I0123 01:10:23.716872 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_23 I0123 
01:10:23.718392 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_24 I0123 01:10:23.719501 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_25 I0123 01:10:23.720441 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_26 I0123 01:10:23.721566 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_27 I0123 01:10:23.722997 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_28 I0123 01:10:23.724246 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_29 I0123 01:10:23.725464 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_3 I0123 01:10:23.726668 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_30 I0123 01:10:23.727677 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_31 I0123 01:10:23.729969 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_32 I0123 01:10:23.731310 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_33 I0123 01:10:23.732484 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_34 I0123 01:10:23.733520 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_35 I0123 01:10:23.734813 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_36 I0123 01:10:23.736440 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_37 I0123 01:10:23.737362 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_38 I0123 01:10:23.738584 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_39 I0123 01:10:23.739598 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_4 I0123 01:10:23.740919 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_40 I0123 01:10:23.743353 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_41 I0123 01:10:23.744527 21486 HdfsTable.java:348] load block md for 
bundle file part-00000_copy_42 I0123 01:10:23.745679 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_43 I0123 01:10:23.746908 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_44 I0123 01:10:23.748014 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_45 I0123 01:10:23.749274 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_46 I0123 01:10:23.750267 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_47 I0123 01:10:23.751282 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_48 I0123 01:10:23.752562 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_49 I0123 01:10:23.753767 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_5 I0123 01:10:23.754766 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_50 I0123 01:10:23.756399 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_51 I0123 01:10:23.757529 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_52 I0123 01:10:23.758764 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_53 I0123 01:10:23.759769 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_54 I0123 01:10:23.760802 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_6 I0123 01:10:23.762383 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_7 I0123 01:10:23.763523 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_8 I0123 01:10:23.764576 21486 HdfsTable.java:348] load block md for bundle file part-00000_copy_9 I0123 01:10:23.768997 21486 FsPermissionChecker.java:290] No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs) Java exception follows: org.apache.hadoop.hdfs.protocol.AclException: The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false. 
at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080) at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3344) at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1983) at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1980) at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.getAclStatus(DistributedFileSystem.java:1980) at com.cloudera.impala.util.FsPermissionChecker.getPermissions(FsPermissionChecker.java:288) at com.cloudera.impala.catalog.HdfsTable.getAvailableAccessLevel(HdfsTable.java:760) at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:844) at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:792) at com.cloudera.impala.catalog.HdfsTable.reloadPartition(HdfsTable.java:1951) at com.cloudera.impala.catalog.CatalogServiceCatalog.reloadPartition(CatalogServiceCatalog.java:1210) at com.cloudera.impala.service.CatalogOpExecutor.execResetMetadata(CatalogOpExecutor.java:2707) at com.cloudera.impala.service.JniCatalog.resetMetadata(JniCatalog.java:151) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AclException): The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false. 
at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080) at org.apache.hadoop.ipc.Client.call(Client.java:1471) at org.apache.hadoop.ipc.Client.call(Client.java:1408) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230) at com.sun.proxy.$Proxy10.getAclStatus(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAclStatus(ClientNamenodeProtocolTranslatorPB.java:1327) at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104) at com.sun.proxy.$Proxy11.getAclStatus(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3342) ... 12 more I0123 01:10:23.770414 21486 HdfsTable.java:441] Loading disk ids for: history_staging.bundle. nodes: 14. filesystem: hdfs://ph-hdp-prd-nn01:8020 I0123 01:10:23.878401 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:10:23.878523 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:10:24.103935 21486 catalog-server.cc:83] ResetMetadata(): response=TResetMetadataResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 43416, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 04: updated_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 43416, 05: table (struct) = TTable { 01: db_name (string) = "history_staging", 02: tbl_name (string) = "bundle", 04: id (i32) = 4125, 05: access_level (i32) = 1, 06: columns (list) = list[18] { [0] = TColumn { 01: columnName (string) = "id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 3, }, [1] = TColumn { 01: columnName (string) = "internal_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats 
(struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 4, }, [2] = TColumn { 01: columnName (string) = "size_in_bytes", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 5, }, [3] = TColumn { 01: columnName (string) = "ext", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 6, }, [4] = TColumn { 01: columnName (string) = "pa__detected_proxy_sources", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 7, }, [5] = TColumn { 01: columnName (string) = "pa__proxy_source", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 8, }, [6] = TColumn { 01: columnName (string) = "pa__os_language", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { 
[0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 9, }, [7] = TColumn { 01: columnName (string) = "collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 10, }, [8] = TColumn { 01: columnName (string) = "collection__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 11, }, [9] = TColumn { 01: columnName (string) = "pa__is_external", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 12, }, [10] = TColumn { 01: columnName (string) = "pa__collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: 
position (i32) = 13, }, [11] = TColumn { 01: columnName (string) = "pa__bundle__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 14, }, [12] = TColumn { 01: columnName (string) = "pa__arrival_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 15, }, [13] = TColumn { 01: columnName (string) = "pa__processed_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 16, }, [14] = TColumn { 01: columnName (string) = "envelope_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 17, }, [15] = TColumn { 01: columnName (string) = "pa__kafka_partition_offset", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: 
col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 18, }, [16] = TColumn { 01: columnName (string) = "pa__kafka_partition", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 19, }, [17] = TColumn { 01: columnName (string) = "pa__client_ip_path", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 20, }, }, 07: clustering_columns (list) = list[3] { [0] = TColumn { 01: columnName (string) = "pa__arrival_day", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 77, 04: num_nulls (i64) = 0, }, 05: position (i32) = 0, }, [1] = TColumn { 01: columnName (string) = "pa__collector_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = 90, 04: num_nulls (i64) = 0, }, 05: position (i32) = 1, }, [2] = TColumn { 01: columnName (string) = 
"pa__schema_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 1, 04: num_nulls (i64) = 0, }, 05: position (i32) = 2, }, }, 09: table_type (i32) = 0, 10: hdfs_table (struct) = THdfsTable { 01: hdfsBaseDir (string) = "hdfs://ph-hdp-prd-nn01:8020/user/hive/warehouse/history_staging.db/bundle", 02: colNames (list) = list[21] { [0] = "pa__arrival_day", [1] = "pa__collector_id", [2] = "pa__schema_version", [3] = "id", [4] = "internal_id", [5] = "size_in_bytes", [6] = "ext", [7] = "pa__detected_proxy_sources", [8] = "pa__proxy_source", [9] = "pa__os_language", [10] = "collector_instance_id", [11] = "collection__fk", [12] = "pa__is_external", [13] = "pa__collector_instance_id", [14] = "pa__bundle__fk", [15] = "pa__arrival_ts", [16] = "pa__processed_ts", [17] = "envelope_ts", [18] = "pa__kafka_partition_offset", [19] = "pa__kafka_partition", [20] = "pa__client_ip_path", }, 03: nullPartitionKeyValue (string) = "__HIVE_DEFAULT_PARTITION__", 04: partitions (map) = map[819] { -1 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[0] { }, 08: blockSize (i32) = 0, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = -1, 15: hms_parameters (map) = map[0] { }, }, 4461 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = 
TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1462406400, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "kafka-output", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "b434d4229412fbf-8d7ef47f0000000d_592464453_data.0.parq", 02: length (i64) = 3029, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1482419824254, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 3029, 03: replica_host_idxs (list) = list[3] { [0] = 0, [1] = 1, [2] = 2, }, 04: disk_ids (list) = list[3] { [0] = 2, [1] = 2, [2] = 1, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=1462406400/pa__collector_id=kafka-output/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 
4461, 15: hms_parameters (map) = map[6] { "COLUMN_STATS_ACCURATE" -> "false", "numFiles" -> "1", "numRows" -> "-1", "rawDataSize" -> "-1", "totalSize" -> "3029", "transient_lastDdlTime" -> "1484725727", }, }, 4462 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1462838400, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "vsm.1_0", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "b434d4229412fbf-8d7ef47f00000002_1792732270_data.0.parq", 02: length (i64) = 3227, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1482419824277, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 3227, 03: replica_host_idxs (list) = list[3] { [0] = 2, [1] = 
3, [2] = 0, }, 04: disk_ids (list) = list[3] { [0] = 1, [1] = 1, [2] = 0, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=1462838400/pa__collector_id=vsm.1_0/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 4462, 15: hms_parameters (map) = map[6] { "COLUMN_STATS_ACCURATE" -> "false", "numFiles" -> "1", "numRows" -> "-1", "rawDataSize" -> "-1", "totalSize" -> "3227", "transient_lastDdlTime" -> "1484725727", }, }, 4463 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1463875200, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { I0123 01:10:24.116214 21486 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ResetMetadata from 10.153.201.23:53772 took 459.000ms I0123 01:10:24.152868 11226 catalog-server.cc:316] Publishing update: CATALOG:9ffe97f5f2c34e09:a39e74f98d4721d8@43398 I0123 01:10:24.218042 10057 rpc-trace.cc:184] RPC call: 
CatalogService.ResetMetadata(from 10.153.201.23:57413) I0123 01:10:24.218272 10057 catalog-server.cc:77] ResetMetadata(): request=TResetMetadataRequest { 01: protocol_version (i32) = 0, 02: is_refresh (bool) = true, 03: table_name (struct) = TTableName { 01: db_name (string) = "history", 02: table_name (string) = "h5_ui", }, 05: partition_spec (list) = list[3] { [0] = TPartitionKeyValue { 01: name (string) = "pa__schema_version", 02: value (string) = "1", }, [1] = TPartitionKeyValue { 01: name (string) = "pa__collector_id", 02: value (string) = "vsphere_h5.1_0", }, [2] = TPartitionKeyValue { 01: name (string) = "pa__arrival_day", 02: value (string) = "1485129600", }, }, } I0123 01:10:24.220121 10057 CatalogServiceCatalog.java:1191] Refreshing Partition metadata: history.h5_ui pa__arrival_day=1485129600/pa__collector_id=vsphere_h5.1_0/pa__schema_version=1 I0123 01:10:24.265018 10057 HdfsTable.java:348] load block md for h5_ui file part-00000 I0123 01:10:24.267470 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_1 I0123 01:10:24.268779 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_10 I0123 01:10:24.270982 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_11 I0123 01:10:24.272068 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_12 I0123 01:10:24.273973 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_13 I0123 01:10:24.275547 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_14 I0123 01:10:24.277566 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_15 I0123 01:10:24.278827 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_16 I0123 01:10:24.279992 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_17 I0123 01:10:24.281168 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_18 I0123 01:10:24.282281 10057 HdfsTable.java:348] load block md for h5_ui file 
part-00000_copy_19 I0123 01:10:24.283354 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_2 I0123 01:10:24.284782 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_20 I0123 01:10:24.285979 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_21 I0123 01:10:24.287225 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_22 I0123 01:10:24.288465 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_23 I0123 01:10:24.290565 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_24 I0123 01:10:24.291709 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_25 I0123 01:10:24.293005 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_26 I0123 01:10:24.294117 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_27 I0123 01:10:24.295315 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_28 I0123 01:10:24.296442 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_29 I0123 01:10:24.297508 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_3 I0123 01:10:24.298599 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_30 I0123 01:10:24.299592 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_31 I0123 01:10:24.301512 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_32 I0123 01:10:24.303295 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_33 I0123 01:10:24.304396 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_34 I0123 01:10:24.305532 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_35 I0123 01:10:24.306752 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_36 I0123 01:10:24.307909 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_37 I0123 01:10:24.309099 10057 HdfsTable.java:348] load block md 
for h5_ui file part-00000_copy_38 I0123 01:10:24.310317 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_39 I0123 01:10:24.311422 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_4 I0123 01:10:24.312630 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_40 I0123 01:10:24.313640 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_41 I0123 01:10:24.315008 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_42 I0123 01:10:24.316630 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_43 I0123 01:10:24.317875 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_44 I0123 01:10:24.319190 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_45 I0123 01:10:24.320351 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_46 I0123 01:10:24.321362 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_47 I0123 01:10:24.322866 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_48 I0123 01:10:24.324120 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_49 I0123 01:10:24.325304 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_5 I0123 01:10:24.326402 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_50 I0123 01:10:24.327780 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_51 I0123 01:10:24.328876 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_52 I0123 01:10:24.330210 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_53 I0123 01:10:24.331585 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_54 I0123 01:10:24.332644 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_6 I0123 01:10:24.333719 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_7 I0123 01:10:24.335320 10057 HdfsTable.java:348] 
load block md for h5_ui file part-00000_copy_8 I0123 01:10:24.336400 10057 HdfsTable.java:348] load block md for h5_ui file part-00000_copy_9 I0123 01:10:24.341167 10057 FsPermissionChecker.java:290] No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs) Java exception follows: org.apache.hadoop.hdfs.protocol.AclException: The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false. at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080) at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at 
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3344) at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1983) at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1980) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.getAclStatus(DistributedFileSystem.java:1980) at com.cloudera.impala.util.FsPermissionChecker.getPermissions(FsPermissionChecker.java:288) at com.cloudera.impala.catalog.HdfsTable.getAvailableAccessLevel(HdfsTable.java:760) at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:844) at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:792) at com.cloudera.impala.catalog.HdfsTable.reloadPartition(HdfsTable.java:1951) at com.cloudera.impala.catalog.CatalogServiceCatalog.reloadPartition(CatalogServiceCatalog.java:1210) at com.cloudera.impala.service.CatalogOpExecutor.execResetMetadata(CatalogOpExecutor.java:2707) at com.cloudera.impala.service.JniCatalog.resetMetadata(JniCatalog.java:151) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AclException): The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false. 
at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080) at org.apache.hadoop.ipc.Client.call(Client.java:1471) at org.apache.hadoop.ipc.Client.call(Client.java:1408) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230) at com.sun.proxy.$Proxy10.getAclStatus(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAclStatus(ClientNamenodeProtocolTranslatorPB.java:1327) at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104) at com.sun.proxy.$Proxy11.getAclStatus(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3342) ... 12 more I0123 01:10:24.342211 10057 HdfsTable.java:441] Loading disk ids for: history.h5_ui. nodes: 14. filesystem: hdfs://ph-hdp-prd-nn01:8020 I0123 01:10:24.601361 10057 catalog-server.cc:83] ResetMetadata(): response=TResetMetadataResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 43417, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 04: updated_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 43417, 05: table (struct) = TTable { 01: db_name (string) = "history", 02: tbl_name (string) = "h5_ui", 04: id (i32) = 4261, 05: access_level (i32) = 1, 06: columns (list) = list[44] { [0] = TColumn { 01: columnName (string) = "id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 36, 02: max_size (i64) = 36, 03: num_distinct_values (i64) = 8136746, 04: num_nulls (i64) = -1, }, 05: position (i32) = 3, }, [1] = TColumn { 01: columnName (string) = "r", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 6, 02: max_size (i64) = 6, 03: num_distinct_values (i64) = 1013025, 04: num_nulls (i64) = -1, }, 05: position (i32) = 4, }, [2] = TColumn { 01: columnName (string) = "ua_browser_version", 02: columnType (struct) = TColumnType { 01: 
types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 5.4123353958129883, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = 695, 04: num_nulls (i64) = -1, }, 05: position (i32) = 5, }, [3] = TColumn { 01: columnName (string) = "action_name", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 20.313512802124023, 02: max_size (i64) = 236, 03: num_distinct_values (i64) = 256499, 04: num_nulls (i64) = -1, }, 05: position (i32) = 6, }, [4] = TColumn { 01: columnName (string) = "pdf", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 7, }, [5] = TColumn { 01: columnName (string) = "_viewts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 9.9999046325683594, 02: max_size (i64) = 10, 03: num_distinct_values (i64) = 658692, 04: num_nulls (i64) = -1, }, 05: position (i32) = 8, }, [6] = TColumn { 01: columnName (string) = "_refts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1.0937662124633789, 02: max_size (i64) = 10, 03: num_distinct_values 
(i64) = 4116, 04: num_nulls (i64) = -1, }, 05: position (i32) = 9, }, [7] = TColumn { 01: columnName (string) = "qt", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 10, }, [8] = TColumn { 01: columnName (string) = "fla", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 11, }, [9] = TColumn { 01: columnName (string) = "gears", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 12, }, [10] = TColumn { 01: columnName (string) = "pa__collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 32.891670227050781, 02: max_size (i64) = 40, 03: num_distinct_values (i64) = 28731, 04: num_nulls (i64) = -1, }, 05: position (i32) = 13, }, [11] = TColumn { 01: columnName (string) = "e_a", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 
04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 27.433856964111328, 02: max_size (i64) = 72, 03: num_distinct_values (i64) = 316, 04: num_nulls (i64) = -1, }, 05: position (i32) = 14, }, [12] = TColumn { 01: columnName (string) = "ua_os", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 5.7655916213989258, 02: max_size (i64) = 32, 03: num_distinct_values (i64) = 28, 04: num_nulls (i64) = -1, }, 05: position (i32) = 15, }, [13] = TColumn { 01: columnName (string) = "state_from_ip", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 0.28558772802352905, 02: max_size (i64) = 2, 03: num_distinct_values (i64) = 51, 04: num_nulls (i64) = -1, }, 05: position (i32) = 16, }, [14] = TColumn { 01: columnName (string) = "ua_browser", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 6.5882635116577148, 02: max_size (i64) = 25, 03: num_distinct_values (i64) = 32, 04: num_nulls (i64) = -1, }, 05: position (i32) = 17, }, [15] = TColumn { 01: columnName (string) = "e_n", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 65.700752258300781, 02: max_size (i64) = 5891, 03: num_distinct_values (i64) = 467670, 04: num_nulls (i64) = -1, }, 05: position (i32) = 18, }, [16] = TColumn { 01: columnName 
(string) = "url", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 108.94293975830078, 02: max_size (i64) = 727, 03: num_distinct_values (i64) = 1531624, 04: num_nulls (i64) = -1, }, 05: position (i32) = 19, }, [17] = TColumn { 01: columnName (string) = "java", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 20, }, [18] = TColumn { 01: columnName (string) = "wma", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 21, }, [19] = TColumn { 01: columnName (string) = "ua_os_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 2.0322856903076172, 02: max_size (i64) = 10, 03: num_distinct_values (i64) = 89, 04: num_nulls (i64) = -1, }, 05: position (i32) = 22, }, [20] = TColumn { 01: columnName (string) = "country_from_ip", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 
0.76013314723968506, 02: max_size (i64) = 2, 03: num_distinct_values (i64) = 132, 04: num_nulls (i64) = -1, }, 05: position (i32) = 23, }, [21] = TColumn { 01: columnName (string) = "h", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1.6589550971984863, 02: max_size (i64) = 2, 03: num_distinct_values (i64) = 24, 04: num_nulls (i64) = -1, }, 05: position (i32) = 24, }, [22] = TColumn { 01: columnName (string) = "cookie", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 25, }, [23] = TColumn { 01: columnName (string) = "received_time", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 13, 02: max_size (i64) = 13, 03: num_distinct_values (i64) = 1573824, 04: num_nulls (i64) = -1, }, 05: position (i32) = 26, }, [24] = TColumn { 01: columnName (string) = "_idn", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 27, }, [25] = TColumn { 01: columnName (string) = "dir", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: 
scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 28, }, [26] = TColumn { 01: columnName (string) = "realp", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 29, }, [27] = TColumn { 01: columnName (string) = "uid", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 55.382312774658203, 02: max_size (i64) = 64, 03: num_distinct_values (i64) = 398706, 04: num_nulls (i64) = -1, }, 05: position (i32) = 30, }, [28] = TColumn { 01: columnName (string) = "pa__is_external", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 31, }, [29] = TColumn { 01: columnName (string) = "ag", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 32, }, [30] = TColumn { 01: columnName (string) = "m", 02: 
columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1.836124062538147, 02: max_size (i64) = 2, 03: num_distinct_values (i64) = 58, 04: num_nulls (i64) = -1, }, 05: position (i32) = 33, }, [31] = TColumn { 01: columnName (string) = "gt_ms", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 3.2257623672485352, 02: max_size (i64) = 105, 03: num_distinct_values (i64) = 13024, 04: num_nulls (i64) = -1, }, 05: position (i32) = 34, }, [32] = TColumn { 01: columnName (string) = "urlref", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 93.133033752441406, 02: max_size (i64) = 731, 03: num_distinct_values (i64) = 1229731, 04: num_nulls (i64) = -1, }, 05: position (i32) = 35, }, [33] = TColumn { 01: columnName (string) = "e_c", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16.675926208496094, 02: max_size (i64) = 18, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 36, }, [34] = TColumn { 01: columnName (string) = "_idts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 10, 02: 
max_size (i64) = 10, 03: num_distinct_values (i64) = 476226, 04: num_nulls (i64) = -1, }, 05: position (i32) = 37, }, [35] = TColumn { 01: columnName (string) = "res", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 9.1596364974975586, 02: max_size (i64) = 37, 03: num_distinct_values (i64) = 3451, 04: num_nulls (i64) = -1, }, 05: position (i32) = 38, }, [36] = TColumn { 01: columnName (string) = "_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 15.999181747436523, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = 560541, 04: num_nulls (i64) = -1, }, 05: position (i32) = 39, }, [37] = TColumn { 01: columnName (string) = "_idvc", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1.2025564908981323, 02: max_size (i64) = 4, 03: num_distinct_values (i64) = 1095, 04: num_nulls (i64) = -1, }, 05: position (i32) = 40, }, [38] = TColumn { 01: columnName (string) = "s", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1.8352814912796021, 02: max_size (i64) = 2, 03: num_distinct_values (i64) = 58, 04: num_nulls (i64) = -1, }, 05: position (i32) = 41, }, [39] = TColumn { 01: columnName (string) = "pa__bundle__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = 
TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 36.527866363525391, 02: max_size (i64) = 39, 03: num_distinct_values (i64) = 8432198, 04: num_nulls (i64) = -1, }, 05: position (i32) = 42, }, [40] = TColumn { 01: columnName (string) = "pa__arrival_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11,
I0123 01:10:24.655748 10057 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ResetMetadata from 10.153.201.23:57413 took 437.000ms
I0123 01:10:24.875526 12622 rpc-trace.cc:184] RPC call: CatalogService.ResetMetadata(from 10.153.201.21:53981)
I0123 01:10:24.875722 12622 catalog-server.cc:77] ResetMetadata(): request=TResetMetadataRequest { 01: protocol_version (i32) = 0, 02: is_refresh (bool) = true, 03: table_name (struct) = TTableName { 01: db_name (string) = "history_staging", 02: table_name (string) = "bundle", }, 05: partition_spec (list) = list[3] { [0] = TPartitionKeyValue { 01: name (string) = "pa__schema_version", 02: value (string) = "1", }, [1] = TPartitionKeyValue { 01: name (string) = "pa__collector_id", 02: value (string) = "hostclient.1_0", }, [2] = TPartitionKeyValue { 01: name (string) = "pa__arrival_day", 02: value (string) = "1485129600", }, }, }
I0123 01:10:24.876520 12622 CatalogServiceCatalog.java:1191] Refreshing Partition metadata: history_staging.bundle pa__arrival_day=1485129600/pa__collector_id=hostclient.1_0/pa__schema_version=1
I0123 01:10:24.878908 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:10:24.879001 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:10:24.912885 12622 HdfsTable.java:348] load block md for bundle file part-00000
I0123 01:10:24.914201 12622 HdfsTable.java:348] load block md for bundle file part-00000_copy_1
I0123 01:10:24.915626 12622 HdfsTable.java:348] load block md for bundle file part-00000_copy_10
I0123 01:10:24.916859 12622 HdfsTable.java:348] load block md for bundle file part-00000_copy_11
I0123 01:10:24.918478 12622 HdfsTable.java:348] load block md for bundle file part-00000_copy_12
I0123 01:10:24.919528 12622 HdfsTable.java:348] load block md for bundle file part-00000_copy_13
I0123 01:10:24.920496 12622 HdfsTable.java:348] load block md for bundle file part-00000_copy_14
I0123 01:10:24.921469 12622 HdfsTable.java:348] load block md for bundle file part-00000_copy_15
I0123 01:10:24.922435 12622 HdfsTable.java:348] load block md for bundle file part-00000_copy_16
I0123 01:10:24.923426 12622 HdfsTable.java:348] load block md for bundle file part-00000_copy_17
I0123 01:10:24.925395 12622 HdfsTable.java:348] load block md for bundle file part-00000_copy_18
I0123 01:10:24.926415 12622 HdfsTable.java:348] load block md for bundle file part-00000_copy_19
I0123 01:10:24.927425 12622 HdfsTable.java:348] load block md for bundle file part-00000_copy_2
I0123 01:10:24.928567 12622 HdfsTable.java:348] load block md for bundle file part-00000_copy_20
I0123 01:10:24.929718 12622 HdfsTable.java:348] load block md for bundle file part-00000_copy_21
I0123 01:10:24.930945 12622 HdfsTable.java:348] load block md for bundle file part-00000_copy_3
I0123 01:10:24.932333 12622 HdfsTable.java:348] load block md for bundle file part-00000_copy_4
I0123 01:10:24.933308 12622 HdfsTable.java:348] load block md for bundle file part-00000_copy_5
I0123 01:10:24.934233 12622 HdfsTable.java:348] load block md for bundle file part-00000_copy_6
I0123 01:10:24.935191 12622 HdfsTable.java:348] load block md for bundle file part-00000_copy_7
I0123 01:10:24.936161 12622 HdfsTable.java:348] load block md for bundle file part-00000_copy_8
I0123 01:10:24.938578 12622 HdfsTable.java:348] load block md for bundle file 
part-00000_copy_9
I0123 01:10:24.947847 12622 FsPermissionChecker.java:290] No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs) Java exception follows:
org.apache.hadoop.hdfs.protocol.AclException: The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false.
    at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
    at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3344)
    at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1983)
    at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1980)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getAclStatus(DistributedFileSystem.java:1980)
    at com.cloudera.impala.util.FsPermissionChecker.getPermissions(FsPermissionChecker.java:288)
    at com.cloudera.impala.catalog.HdfsTable.getAvailableAccessLevel(HdfsTable.java:760)
    at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:844)
    at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:792)
    at com.cloudera.impala.catalog.HdfsTable.reloadPartition(HdfsTable.java:1951)
    at com.cloudera.impala.catalog.CatalogServiceCatalog.reloadPartition(CatalogServiceCatalog.java:1210)
    at com.cloudera.impala.service.CatalogOpExecutor.execResetMetadata(CatalogOpExecutor.java:2707)
    at com.cloudera.impala.service.JniCatalog.resetMetadata(JniCatalog.java:151)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AclException): The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false.
    at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
    at org.apache.hadoop.ipc.Client.call(Client.java:1471)
    at org.apache.hadoop.ipc.Client.call(Client.java:1408)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
    at com.sun.proxy.$Proxy10.getAclStatus(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAclStatus(ClientNamenodeProtocolTranslatorPB.java:1327)
    at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy11.getAclStatus(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3342)
    ... 12 more
I0123 01:10:24.949800 12622 HdfsTable.java:441] Loading disk ids for: history_staging.bundle. nodes: 14. filesystem: hdfs://ph-hdp-prd-nn01:8020
I0123 01:10:24.954814 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:10:25.180037 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 225.000ms
I0123 01:10:25.262073 12622 catalog-server.cc:83] ResetMetadata(): response=TResetMetadataResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 43418, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 04: updated_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 43418, 05: table (struct) = TTable { 01: db_name (string) = "history_staging", 02: tbl_name (string) = "bundle", 04: id (i32) = 4125, 05: access_level (i32) = 1, 06: columns (list) = list[18] { [0] = TColumn { 01: columnName (string) = "id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 3, }, [1] = TColumn { 01: columnName (string) = "internal_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: 
col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 4, }, [2] = TColumn { 01: columnName (string) = "size_in_bytes", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 5, }, [3] = TColumn { 01: columnName (string) = "ext", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 6, }, [4] = TColumn { 01: columnName (string) = "pa__detected_proxy_sources", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 7, }, [5] = TColumn { 01: columnName (string) = "pa__proxy_source", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 8, }, [6] = TColumn { 01: columnName (string) = "pa__os_language", 02: columnType (struct) = TColumnType { 01: types (list) = 
list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 9, }, [7] = TColumn { 01: columnName (string) = "collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 10, }, [8] = TColumn { 01: columnName (string) = "collection__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 11, }, [9] = TColumn { 01: columnName (string) = "pa__is_external", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 12, }, [10] = TColumn { 01: columnName (string) = "pa__collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = 
-1, }, 05: position (i32) = 13, }, [11] = TColumn { 01: columnName (string) = "pa__bundle__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 14, }, [12] = TColumn { 01: columnName (string) = "pa__arrival_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 15, }, [13] = TColumn { 01: columnName (string) = "pa__processed_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 16, }, [14] = TColumn { 01: columnName (string) = "envelope_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 17, }, [15] = TColumn { 01: columnName (string) = "pa__kafka_partition_offset", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 
04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 18, }, [16] = TColumn { 01: columnName (string) = "pa__kafka_partition", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 19, }, [17] = TColumn { 01: columnName (string) = "pa__client_ip_path", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 20, }, }, 07: clustering_columns (list) = list[3] { [0] = TColumn { 01: columnName (string) = "pa__arrival_day", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 77, 04: num_nulls (i64) = 0, }, 05: position (i32) = 0, }, [1] = TColumn { 01: columnName (string) = "pa__collector_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = 90, 04: num_nulls (i64) = 0, }, 05: position (i32) = 1, }, [2] = TColumn { 01: columnName (string) = 
"pa__schema_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 1, 04: num_nulls (i64) = 0, }, 05: position (i32) = 2, }, }, 09: table_type (i32) = 0, 10: hdfs_table (struct) = THdfsTable { 01: hdfsBaseDir (string) = "hdfs://ph-hdp-prd-nn01:8020/user/hive/warehouse/history_staging.db/bundle", 02: colNames (list) = list[21] { [0] = "pa__arrival_day", [1] = "pa__collector_id", [2] = "pa__schema_version", [3] = "id", [4] = "internal_id", [5] = "size_in_bytes", [6] = "ext", [7] = "pa__detected_proxy_sources", [8] = "pa__proxy_source", [9] = "pa__os_language", [10] = "collector_instance_id", [11] = "collection__fk", [12] = "pa__is_external", [13] = "pa__collector_instance_id", [14] = "pa__bundle__fk", [15] = "pa__arrival_ts", [16] = "pa__processed_ts", [17] = "envelope_ts", [18] = "pa__kafka_partition_offset", [19] = "pa__kafka_partition", [20] = "pa__client_ip_path", }, 03: nullPartitionKeyValue (string) = "__HIVE_DEFAULT_PARTITION__", 04: partitions (map) = map[819] { -1 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[0] { }, 08: blockSize (i32) = 0, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = -1, 15: hms_parameters (map) = map[0] { }, }, 4461 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = 
TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1462406400, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "kafka-output", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "b434d4229412fbf-8d7ef47f0000000d_592464453_data.0.parq", 02: length (i64) = 3029, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1482419824254, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 3029, 03: replica_host_idxs (list) = list[3] { [0] = 0, [1] = 1, [2] = 2, }, 04: disk_ids (list) = list[3] { [0] = 2, [1] = 2, [2] = 1, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=1462406400/pa__collector_id=kafka-output/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 
4461, 15: hms_parameters (map) = map[6] { "COLUMN_STATS_ACCURATE" -> "false", "numFiles" -> "1", "numRows" -> "-1", "rawDataSize" -> "-1", "totalSize" -> "3029", "transient_lastDdlTime" -> "1484725727", }, }, 4462 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1462838400, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "vsm.1_0", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "b434d4229412fbf-8d7ef47f00000002_1792732270_data.0.parq", 02: length (i64) = 3227, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1482419824277, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 3227, 03: replica_host_idxs (list) = list[3] { [0] = 2, [1] = 
3, [2] = 0, }, 04: disk_ids (list) = list[3] { [0] = 1, [1] = 1, [2] = 0, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=1462838400/pa__collector_id=vsm.1_0/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 4462, 15: hms_parameters (map) = map[6] { "COLUMN_STATS_ACCURATE" -> "false", "numFiles" -> "1", "numRows" -> "-1", "rawDataSize" -> "-1", "totalSize" -> "3227", "transient_lastDdlTime" -> "1484725727", }, }, 4463 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1463875200, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { I0123 01:10:25.275595 12622 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ResetMetadata from 10.153.201.21:53981 took 400.000ms I0123 01:10:25.880087 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:10:25.880170 11232 rpc-trace.cc:194] RPC call: 
statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:10:26.126490 18966 catalog-server.cc:83] ResetMetadata(): response=TResetMetadataResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 43398, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 04: updated_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 43398, 05: table (struct) = TTable { 01: db_name (string) = "history", 02: tbl_name (string) = "bundle", 04: id (i32) = 4153, 05: access_level (i32) = 1, 06: columns (list) = list[18] { [0] = TColumn { 01: columnName (string) = "id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 36.555374145507812, 02: max_size (i64) = 117, 03: num_distinct_values (i64) = 23103018, 04: num_nulls (i64) = -1, }, 05: position (i32) = 3, }, [1] = TColumn { 01: columnName (string) = "internal_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 10202831, 04: num_nulls (i64) = -1, }, 05: position (i32) = 4, }, [2] = TColumn { 01: columnName (string) = "size_in_bytes", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: 
num_distinct_values (i64) = 322605, 04: num_nulls (i64) = -1, }, 05: position (i32) = 5, }, [3] = TColumn { 01: columnName (string) = "ext", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 6, }, [4] = TColumn { 01: columnName (string) = "pa__detected_proxy_sources", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 0.25181648135185242, 02: max_size (i64) = 21, 03: num_distinct_values (i64) = 8, 04: num_nulls (i64) = -1, }, 05: position (i32) = 7, }, [5] = TColumn { 01: columnName (string) = "pa__proxy_source", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 7.9655427932739258, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 4, 04: num_nulls (i64) = -1, }, 05: position (i32) = 8, }, [6] = TColumn { 01: columnName (string) = "pa__os_language", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 2, 02: max_size (i64) = 2, 03: num_distinct_values (i64) = 14, 04: num_nulls (i64) = -1, }, 05: position (i32) = 9, }, [7] = TColumn { 01: columnName (string) = "collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 
02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 33.735977172851562, 02: max_size (i64) = 88, 03: num_distinct_values (i64) = 925201, 04: num_nulls (i64) = -1, }, 05: position (i32) = 10, }, [8] = TColumn { 01: columnName (string) = "collection__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 32, 02: max_size (i64) = 32, 03: num_distinct_values (i64) = 555701, 04: num_nulls (i64) = -1, }, 05: position (i32) = 11, }, [9] = TColumn { 01: columnName (string) = "pa__is_external", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 12, }, [10] = TColumn { 01: columnName (string) = "pa__collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 33.148296356201172, 02: max_size (i64) = 91, 03: num_distinct_values (i64) = 1050297, 04: num_nulls (i64) = -1, }, 05: position (i32) = 13, }, [11] = TColumn { 01: columnName (string) = "pa__bundle__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 36.555374145507812, 02: max_size (i64) = 117, 03: num_distinct_values (i64) = 23103018, 04: 
num_nulls (i64) = -1, }, 05: position (i32) = 14, }, [12] = TColumn { 01: columnName (string) = "pa__arrival_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = 17700588, 04: num_nulls (i64) = -1, }, 05: position (i32) = 15, }, [13] = TColumn { 01: columnName (string) = "pa__processed_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = 19070410, 04: num_nulls (i64) = -1, }, 05: position (i32) = 16, }, [14] = TColumn { 01: columnName (string) = "pa__kafka_partition_offset", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 8213797, 04: num_nulls (i64) = -1, }, 05: position (i32) = 17, }, [15] = TColumn { 01: columnName (string) = "pa__kafka_partition", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 18, }, [16] = TColumn { 01: columnName (string) = "envelope_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 
01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = 5258683, 04: num_nulls (i64) = -1, }, 05: position (i32) = 19, }, [17] = TColumn { 01: columnName (string) = "pa__client_ip_path", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 10.111308097839355, 02: max_size (i64) = 42, 03: num_distinct_values (i64) = 111645, 04: num_nulls (i64) = -1, }, 05: position (i32) = 20, }, }, 07: clustering_columns (list) = list[3] { [0] = TColumn { 01: columnName (string) = "pa__arrival_day", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 0, }, [1] = TColumn { 01: columnName (string) = "pa__collector_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 1, }, [2] = TColumn { 01: columnName (string) = "pa__schema_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 2, }, 
}, 08: table_stats (struct) = TTableStats { 01: num_rows (i64) = 23289772, }, 09: table_type (i32) = 0, 10: hdfs_table (struct) = THdfsTable { 01: hdfsBaseDir (string) = "hdfs://ph-hdp-prd-nn01:8020/user/hive/warehouse/history.db/bundle", 02: colNames (list) = list[21] { [0] = "pa__arrival_day", [1] = "pa__collector_id", [2] = "pa__schema_version", [3] = "id", [4] = "internal_id", [5] = "size_in_bytes", [6] = "ext", [7] = "pa__detected_proxy_sources", [8] = "pa__proxy_source", [9] = "pa__os_language", [10] = "collector_instance_id", [11] = "collection__fk", [12] = "pa__is_external", [13] = "pa__collector_instance_id", [14] = "pa__bundle__fk", [15] = "pa__arrival_ts", [16] = "pa__processed_ts", [17] = "pa__kafka_partition_offset", [18] = "pa__kafka_partition", [19] = "envelope_ts", [20] = "pa__client_ip_path", }, 03: nullPartitionKeyValue (string) = "__HIVE_DEFAULT_PARTITION__", 04: partitions (map) = map[18138] { -1 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[0] { }, 08: blockSize (i32) = 0, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = -1, 15: hms_parameters (map) = map[0] { }, }, 17022 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 0, }, }, }, }, 
[1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "com.vmware.ph.vc55u2.nonintrusive", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "c44807a1221194b9-984fed3e00000004_999727056_data.0.parq", 02: length (i64) = 9779, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1484746761391, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 9779, 03: replica_host_idxs (list) = list[3] { [0] = 0, [1] = 1, [2] = 2, }, 04: disk_ids (list) = list[3] { [0] = 1, [1] = 1, [2] = 1, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=0/pa__collector_id=com.vmware.ph.vc55u2.nonintrusive/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = 50, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 17022, 15: hms_parameters (map) = map[8] { "COLUMN_STATS_ACCURATE" -> "true", "impala_intermediate_stats_chunk0" -> "HBZkABsSjAtlbnZl[...](1784)", "impala_intermediate_stats_num_chunks" -> "1", "numFiles" -> "1", "numRows" -> "50", "rawDataSize" -> "-1", "totalSize" -> "9779", 
"transient_lastDdlTime" -> "1484746956", }, }, 17023 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1409529600, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "com.vmware.ph.vc55u2.nonintrusive", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "c44807a1221194b9-984fed3e00000009_150531334_data.0.parq", 02: length (i64) = 3417, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1484746762924, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 3417, 03: replica_host_idxs (list) = list[3] { [0] = 3, [1] = 4, [2] = 5, }, 04: disk_ids (list) = list[3] { [0] = 0, [1] = 1, [2] = 2, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] 
= false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=1409529600/pa__collector_id=com.vmware.ph.vc55u2.nonintrusive/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = 3, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 17023, 15: hms_parameters (map) = map[8] { "COLUMN_STATS_ACCURATE" -> "true", "impala_intermediate_stats_chunk0" -> "HBYGABsSjAtlbnZl[...](1172)", "impala_intermediate_stats_num_chunks" -> "1", "numFiles" -> "1", "numRows" -> "3", "rawDataSize" -> "-1", "totalSize" -> "3417", "transient_lastDdlTime" -> "1484746954", }, }, 17024 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1409702400, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = T I0123 01:10:26.886441 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:10:26.886622 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:10:27.590154 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 01:10:27.817418 18966 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ResetMetadata from 10.153.201.15:48708 took 58s260ms I0123 01:10:27.887938 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 
10.153.201.11:51416) I0123 01:10:27.888082 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:10:28.165923 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 576.000ms I0123 01:10:28.263514 11522 rpc-trace.cc:184] RPC call: CatalogService.ResetMetadata(from 10.153.201.23:33385) I0123 01:10:28.265590 11522 catalog-server.cc:77] ResetMetadata(): request=TResetMetadataRequest { 01: protocol_version (i32) = 0, 02: is_refresh (bool) = true, 03: table_name (struct) = TTableName { 01: db_name (string) = "history_staging", 02: table_name (string) = "bundle", }, 05: partition_spec (list) = list[3] { [0] = TPartitionKeyValue { 01: name (string) = "pa__schema_version", 02: value (string) = "1", }, [1] = TPartitionKeyValue { 01: name (string) = "pa__collector_id", 02: value (string) = "nova.poc", }, [2] = TPartitionKeyValue { 01: name (string) = "pa__arrival_day", 02: value (string) = "1485129600", }, }, } I0123 01:10:28.271857 11522 CatalogServiceCatalog.java:1191] Refreshing Partition metadata: history_staging.bundle pa__arrival_day=1485129600/pa__collector_id=nova.poc/pa__schema_version=1 I0123 01:10:28.340600 11226 catalog-server.cc:316] Publishing update: TABLE:history.itfm_ui@43411 I0123 01:10:28.342967 11522 HdfsTable.java:348] load block md for bundle file part-00000 I0123 01:10:28.347645 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_1 I0123 01:10:28.349483 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_10 I0123 01:10:28.351697 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_100 I0123 01:10:28.352659 11226 catalog-server.cc:316] Publishing update: TABLE:history.esxi_hostinfo@43404 I0123 01:10:28.355324 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_101 I0123 01:10:28.361907 11522 HdfsTable.java:348] load block md for 
bundle file part-00000_copy_102 I0123 01:10:28.363729 11226 catalog-server.cc:316] Publishing update: TABLE:history.airwatch_console@43400 I0123 01:10:28.364259 11226 catalog-server.cc:316] Publishing update: TABLE:history.h5_ui@43417 I0123 01:10:28.365401 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_103 I0123 01:10:28.367802 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_104 I0123 01:10:28.369810 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_105 I0123 01:10:28.372113 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_106 I0123 01:10:28.373893 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_107 I0123 01:10:28.376723 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_108 I0123 01:10:28.380657 11226 catalog-server.cc:316] Publishing update: TABLE:history.esxi_crashreport@43414 I0123 01:10:28.380831 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_109 I0123 01:10:28.385586 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_11 I0123 01:10:28.387424 11226 catalog-server.cc:316] Publishing update: TABLE:history.h5_ui_errors@43410 I0123 01:10:28.388413 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_110 I0123 01:10:28.389914 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_111 I0123 01:10:28.393472 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_112 I0123 01:10:28.395524 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_113 I0123 01:10:28.395606 11226 catalog-server.cc:316] Publishing update: TABLE:history.sa_issue_rating@43399 I0123 01:10:28.396021 11226 catalog-server.cc:316] Publishing update: TABLE:history.pa__streaming_batch@43401 I0123 01:10:28.397948 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_114 I0123 01:10:28.399312 11522 HdfsTable.java:348] load block md for 
bundle file part-00000_copy_115 I0123 01:10:28.401437 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_116 I0123 01:10:28.402799 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_117 I0123 01:10:28.403931 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_118 I0123 01:10:28.405336 11226 catalog-server.cc:316] Publishing update: TABLE:history_staging.airwatch_console@43407 I0123 01:10:28.406428 11226 catalog-server.cc:316] Publishing update: TABLE:history_staging.esxi_crashreport@43412 I0123 01:10:28.406608 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_119 I0123 01:10:28.406975 11226 catalog-server.cc:316] Publishing update: TABLE:history_staging.astro_ui@43403 I0123 01:10:28.407891 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_12 I0123 01:10:28.409119 11226 catalog-server.cc:316] Publishing update: TABLE:history_staging.esxi_hostinfo@43406 I0123 01:10:28.409981 11226 catalog-server.cc:316] Publishing update: TABLE:history_staging.bundle@43418 I0123 01:10:28.410212 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_120 I0123 01:10:28.412617 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_121 I0123 01:10:28.413895 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_122 I0123 01:10:28.415380 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_123 I0123 01:10:28.416656 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_124 I0123 01:10:28.418413 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_125 I0123 01:10:28.421350 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_126 I0123 01:10:28.422955 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_127 I0123 01:10:28.424489 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_128 I0123 01:10:28.426264 11522 
HdfsTable.java:348] load block md for bundle file part-00000_copy_129 I0123 01:10:28.427259 11226 catalog-server.cc:316] Publishing update: TABLE:staging.ph_dow_20170123_010826_bundle@43409 I0123 01:10:28.427783 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_13 I0123 01:10:28.428325 11226 catalog-server.cc:316] Publishing update: CATALOG:9ffe97f5f2c34e09:a39e74f98d4721d8@43418 I0123 01:10:28.430541 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_130 I0123 01:10:28.434368 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_131 I0123 01:10:28.435916 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_132 I0123 01:10:28.438557 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_133 I0123 01:10:28.440356 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_134 I0123 01:10:28.441588 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_135 I0123 01:10:28.443111 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_136 I0123 01:10:28.445210 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_137 I0123 01:10:28.447554 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_138 I0123 01:10:28.449416 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_139 I0123 01:10:28.452025 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_14 I0123 01:10:28.453467 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_140 I0123 01:10:28.454567 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_141 I0123 01:10:28.456405 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_142 I0123 01:10:28.458046 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_143 I0123 01:10:28.460309 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_144 I0123 01:10:28.461688 11522 
HdfsTable.java:348] load block md for bundle file part-00000_copy_145 I0123 01:10:28.463316 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_146 I0123 01:10:28.465627 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_147 I0123 01:10:28.467109 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_148 I0123 01:10:28.468513 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_149 I0123 01:10:28.469964 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_15 I0123 01:10:28.473534 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_150 I0123 01:10:28.475013 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_151 I0123 01:10:28.476192 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_152 I0123 01:10:28.478500 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_153 I0123 01:10:28.480173 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_154 I0123 01:10:28.481529 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_155 I0123 01:10:28.483085 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_156 I0123 01:10:28.486701 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_157 I0123 01:10:28.488065 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_158 I0123 01:10:28.490736 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_159 I0123 01:10:28.493288 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_16 I0123 01:10:28.497584 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_160 I0123 01:10:28.500366 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_161 I0123 01:10:28.501878 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_162 I0123 01:10:28.505110 11522 HdfsTable.java:348] load block md for 
bundle file part-00000_copy_163 I0123 01:10:28.506603 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_164 I0123 01:10:28.508049 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_165 I0123 01:10:28.509742 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_166 I0123 01:10:28.511571 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_167 I0123 01:10:28.513015 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_168 I0123 01:10:28.514595 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_169 I0123 01:10:28.515961 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_17 I0123 01:10:28.517972 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_170 I0123 01:10:28.519174 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_171 I0123 01:10:28.520401 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_172 I0123 01:10:28.521833 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_173 I0123 01:10:28.524093 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_174 I0123 01:10:28.525631 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_175 I0123 01:10:28.527106 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_176 I0123 01:10:28.528592 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_177 I0123 01:10:28.531260 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_178 I0123 01:10:28.532632 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_179 I0123 01:10:28.533913 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_18 I0123 01:10:28.535323 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_180 I0123 01:10:28.537971 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_181 I0123 
01:10:28.540593 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_182 I0123 01:10:28.542011 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_183 I0123 01:10:28.544452 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_184 I0123 01:10:28.545956 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_185 I0123 01:10:28.548095 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_186 I0123 01:10:28.549512 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_187 I0123 01:10:28.551086 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_188 I0123 01:10:28.552839 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_189 I0123 01:10:28.554432 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_19 I0123 01:10:28.555575 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_190 I0123 01:10:28.557456 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_191 I0123 01:10:28.558954 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_192 I0123 01:10:28.560844 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_193 I0123 01:10:28.564671 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_194 I0123 01:10:28.566191 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_195 I0123 01:10:28.568437 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_196 I0123 01:10:28.570420 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_197 I0123 01:10:28.571871 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_198 I0123 01:10:28.573714 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_199 I0123 01:10:28.574856 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_2 I0123 01:10:28.576848 11522 HdfsTable.java:348] 
load block md for bundle file part-00000_copy_20 I0123 01:10:28.578531 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_200 I0123 01:10:28.581722 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_201 I0123 01:10:28.583405 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_202 I0123 01:10:28.585446 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_203 I0123 01:10:28.587795 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_204 I0123 01:10:28.589799 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_205 I0123 01:10:28.591322 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_206 I0123 01:10:28.592939 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_207 I0123 01:10:28.596135 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_208 I0123 01:10:28.597663 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_209 I0123 01:10:28.599226 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_21 I0123 01:10:28.601301 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_210 I0123 01:10:28.602776 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_211 I0123 01:10:28.605757 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_212 I0123 01:10:28.609138 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_213 I0123 01:10:28.614677 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_214 I0123 01:10:28.617753 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_215 I0123 01:10:28.622299 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_216 I0123 01:10:28.624035 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_217 I0123 01:10:28.625816 11522 HdfsTable.java:348] load block md for bundle file 
part-00000_copy_218 I0123 01:10:28.627419 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_219 I0123 01:10:28.629163 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_22 I0123 01:10:28.630494 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_220 I0123 01:10:28.633478 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_221 I0123 01:10:28.634990 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_222 I0123 01:10:28.636492 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_223 I0123 01:10:28.637853 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_224 I0123 01:10:28.639310 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_225 I0123 01:10:28.640498 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_226 I0123 01:10:28.642976 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_227 I0123 01:10:28.644176 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_228 I0123 01:10:28.649024 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_229 I0123 01:10:28.650585 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_23 I0123 01:10:28.651921 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_230 I0123 01:10:28.657048 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_231 I0123 01:10:28.661038 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_232 I0123 01:10:28.662536 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_233 I0123 01:10:28.663676 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_234 I0123 01:10:28.664976 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_235 I0123 01:10:28.671380 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_236 I0123 01:10:28.672627 
11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_237 I0123 01:10:28.674160 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_238 I0123 01:10:28.675421 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_239 I0123 01:10:28.679085 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_24 I0123 01:10:28.680471 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_240 I0123 01:10:28.681689 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_241 I0123 01:10:28.682996 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_242 I0123 01:10:28.688217 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_243 I0123 01:10:28.689605 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_244 I0123 01:10:28.691561 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_245 I0123 01:10:28.695726 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_246 I0123 01:10:28.696990 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_247 I0123 01:10:28.698441 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_248 I0123 01:10:28.702862 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_249 I0123 01:10:28.704349 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_25 I0123 01:10:28.705591 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_250 I0123 01:10:28.706847 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_251 I0123 01:10:28.708112 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_252 I0123 01:10:28.709408 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_253 I0123 01:10:28.711230 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_254 I0123 01:10:28.713204 11522 HdfsTable.java:348] load block md 
for bundle file part-00000_copy_255 I0123 01:10:28.714318 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_256 I0123 01:10:28.717181 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_257 I0123 01:10:28.718762 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_258 I0123 01:10:28.720027 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_259 I0123 01:10:28.721796 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_26 I0123 01:10:28.727301 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_260 I0123 01:10:28.728291 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_261 I0123 01:10:28.730258 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_262 I0123 01:10:28.733517 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_263 I0123 01:10:28.737256 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_264 I0123 01:10:28.741436 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_265 I0123 01:10:28.743360 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_266 I0123 01:10:28.747237 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_267 I0123 01:10:28.748461 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_268 I0123 01:10:28.750138 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_269 I0123 01:10:28.751281 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_27 I0123 01:10:28.752404 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_270 I0123 01:10:28.753293 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_271 I0123 01:10:28.754478 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_272 I0123 01:10:28.756551 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_273 I0123 
01:10:28.757570 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_274 I0123 01:10:28.758546 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_28 I0123 01:10:28.759727 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_29 I0123 01:10:28.760785 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_3 I0123 01:10:28.763092 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_30 I0123 01:10:28.764253 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_31 I0123 01:10:28.765314 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_32 I0123 01:10:28.766445 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_33 I0123 01:10:28.767496 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_34 I0123 01:10:28.769822 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_35 I0123 01:10:28.770952 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_36 I0123 01:10:28.771986 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_37 I0123 01:10:28.773039 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_38 I0123 01:10:28.773957 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_39 I0123 01:10:28.775223 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_4 I0123 01:10:28.776578 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_40 I0123 01:10:28.777822 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_41 I0123 01:10:28.778959 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_42 I0123 01:10:28.779922 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_43 I0123 01:10:28.780972 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_44 I0123 01:10:28.783052 11522 HdfsTable.java:348] load block md for 
bundle file part-00000_copy_45 I0123 01:10:28.784351 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_46 I0123 01:10:28.785580 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_47 I0123 01:10:28.786753 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_48 I0123 01:10:28.787971 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_49 I0123 01:10:28.789739 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_5 I0123 01:10:28.790848 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_50 I0123 01:10:28.792172 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_51 I0123 01:10:28.793215 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_52 I0123 01:10:28.794337 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_53 I0123 01:10:28.796316 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_54 I0123 01:10:28.797570 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_55 I0123 01:10:28.798720 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_56 I0123 01:10:28.799944 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_57 I0123 01:10:28.801053 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_58 I0123 01:10:28.802650 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_59 I0123 01:10:28.803679 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_6 I0123 01:10:28.804808 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_60 I0123 01:10:28.805830 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_61 I0123 01:10:28.806792 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_62 I0123 01:10:28.809217 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_63 I0123 01:10:28.810209 11522 
HdfsTable.java:348] load block md for bundle file part-00000_copy_64 I0123 01:10:28.811398 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_65 I0123 01:10:28.812777 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_66 I0123 01:10:28.813935 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_67 I0123 01:10:28.816190 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_68 I0123 01:10:28.817203 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_69 I0123 01:10:28.818331 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_7 I0123 01:10:28.819332 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_70 I0123 01:10:28.820411 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_71 I0123 01:10:28.822882 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_72 I0123 01:10:28.823937 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_73 I0123 01:10:28.825166 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_74 I0123 01:10:28.826175 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_75 I0123 01:10:28.827235 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_76 I0123 01:10:28.829191 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_77 I0123 01:10:28.830442 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_78 I0123 01:10:28.831640 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_79 I0123 01:10:28.832870 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_8 I0123 01:10:28.834192 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_80 I0123 01:10:28.836854 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_81 I0123 01:10:28.838116 11522 HdfsTable.java:348] load block md for bundle file 
part-00000_copy_82 I0123 01:10:28.839356 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_83 I0123 01:10:28.840464 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_84 I0123 01:10:28.841569 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_85 I0123 01:10:28.842840 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_86 I0123 01:10:28.843986 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_87 I0123 01:10:28.845078 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_88 I0123 01:10:28.846257 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_89 I0123 01:10:28.847370 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_9 I0123 01:10:28.848579 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_90 I0123 01:10:28.851246 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_91 I0123 01:10:28.855265 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_92 I0123 01:10:28.856302 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_93 I0123 01:10:28.859288 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_94 I0123 01:10:28.862099 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_95 I0123 01:10:28.865380 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_96 I0123 01:10:28.866716 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_97 I0123 01:10:28.867766 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_98 I0123 01:10:28.869675 11522 HdfsTable.java:348] load block md for bundle file part-00000_copy_99 I0123 01:10:28.873139 11522 FsPermissionChecker.java:290] No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs) Java exception follows: org.apache.hadoop.hdfs.protocol.AclException: The ACL operation has been rejected. 
Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false. at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080) at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3344) at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1983) at 
org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1980) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.getAclStatus(DistributedFileSystem.java:1980) at com.cloudera.impala.util.FsPermissionChecker.getPermissions(FsPermissionChecker.java:288) at com.cloudera.impala.catalog.HdfsTable.getAvailableAccessLevel(HdfsTable.java:760) at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:844) at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:792) at com.cloudera.impala.catalog.HdfsTable.reloadPartition(HdfsTable.java:1951) at com.cloudera.impala.catalog.CatalogServiceCatalog.reloadPartition(CatalogServiceCatalog.java:1210) at com.cloudera.impala.service.CatalogOpExecutor.execResetMetadata(CatalogOpExecutor.java:2707) at com.cloudera.impala.service.JniCatalog.resetMetadata(JniCatalog.java:151) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AclException): The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false. 
at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617) at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080) at org.apache.hadoop.ipc.Client.call(Client.java:1471) at org.apache.hadoop.ipc.Client.call(Client.java:1408) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230) at com.sun.proxy.$Proxy10.getAclStatus(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAclStatus(ClientNamenodeProtocolTranslatorPB.java:1327) at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104) at com.sun.proxy.$Proxy11.getAclStatus(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3342) ... 12 more I0123 01:10:28.874419 11522 HdfsTable.java:441] Loading disk ids for: history_staging.bundle. nodes: 14. filesystem: hdfs://ph-hdp-prd-nn01:8020 I0123 01:10:28.888455 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:10:28.888552 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:10:29.163014 11522 catalog-server.cc:83] ResetMetadata(): response=TResetMetadataResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 43419, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 04: updated_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 43419, 05: table (struct) = TTable { 01: db_name (string) = "history_staging", 02: tbl_name (string) = "bundle", 04: id (i32) = 4125, 05: access_level (i32) = 1, 06: columns (list) = list[18] { [0] = TColumn { 01: columnName (string) = "id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 3, }, [1] = TColumn { 01: columnName (string) = "internal_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats 
(struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 4, }, [2] = TColumn { 01: columnName (string) = "size_in_bytes", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 5, }, [3] = TColumn { 01: columnName (string) = "ext", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 6, }, [4] = TColumn { 01: columnName (string) = "pa__detected_proxy_sources", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 7, }, [5] = TColumn { 01: columnName (string) = "pa__proxy_source", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 8, }, [6] = TColumn { 01: columnName (string) = "pa__os_language", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { 
[0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 9, }, [7] = TColumn { 01: columnName (string) = "collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 10, }, [8] = TColumn { 01: columnName (string) = "collection__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 11, }, [9] = TColumn { 01: columnName (string) = "pa__is_external", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 12, }, [10] = TColumn { 01: columnName (string) = "pa__collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: 
position (i32) = 13, }, [11] = TColumn { 01: columnName (string) = "pa__bundle__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 14, }, [12] = TColumn { 01: columnName (string) = "pa__arrival_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 15, }, [13] = TColumn { 01: columnName (string) = "pa__processed_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 16, }, [14] = TColumn { 01: columnName (string) = "envelope_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 17, }, [15] = TColumn { 01: columnName (string) = "pa__kafka_partition_offset", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: 
col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 18, }, [16] = TColumn { 01: columnName (string) = "pa__kafka_partition", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 19, }, [17] = TColumn { 01: columnName (string) = "pa__client_ip_path", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 20, }, }, 07: clustering_columns (list) = list[3] { [0] = TColumn { 01: columnName (string) = "pa__arrival_day", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 77, 04: num_nulls (i64) = 0, }, 05: position (i32) = 0, }, [1] = TColumn { 01: columnName (string) = "pa__collector_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 03: num_distinct_values (i64) = 90, 04: num_nulls (i64) = 0, }, 05: position (i32) = 1, }, [2] = TColumn { 01: columnName (string) = 
"pa__schema_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 1, 04: num_nulls (i64) = 0, }, 05: position (i32) = 2, }, }, 09: table_type (i32) = 0, 10: hdfs_table (struct) = THdfsTable { 01: hdfsBaseDir (string) = "hdfs://ph-hdp-prd-nn01:8020/user/hive/warehouse/history_staging.db/bundle", 02: colNames (list) = list[21] { [0] = "pa__arrival_day", [1] = "pa__collector_id", [2] = "pa__schema_version", [3] = "id", [4] = "internal_id", [5] = "size_in_bytes", [6] = "ext", [7] = "pa__detected_proxy_sources", [8] = "pa__proxy_source", [9] = "pa__os_language", [10] = "collector_instance_id", [11] = "collection__fk", [12] = "pa__is_external", [13] = "pa__collector_instance_id", [14] = "pa__bundle__fk", [15] = "pa__arrival_ts", [16] = "pa__processed_ts", [17] = "envelope_ts", [18] = "pa__kafka_partition_offset", [19] = "pa__kafka_partition", [20] = "pa__client_ip_path", }, 03: nullPartitionKeyValue (string) = "__HIVE_DEFAULT_PARTITION__", 04: partitions (map) = map[819] { -1 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[0] { }, 08: blockSize (i32) = 0, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = -1, 15: hms_parameters (map) = map[0] { }, }, 4461 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = 
TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1462406400, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "kafka-output", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "b434d4229412fbf-8d7ef47f0000000d_592464453_data.0.parq", 02: length (i64) = 3029, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1482419824254, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 3029, 03: replica_host_idxs (list) = list[3] { [0] = 0, [1] = 1, [2] = 2, }, 04: disk_ids (list) = list[3] { [0] = 2, [1] = 2, [2] = 1, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=1462406400/pa__collector_id=kafka-output/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 
4461, 15: hms_parameters (map) = map[6] { "COLUMN_STATS_ACCURATE" -> "false", "numFiles" -> "1", "numRows" -> "-1", "rawDataSize" -> "-1", "totalSize" -> "3029", "transient_lastDdlTime" -> "1484725727", }, }, 4462 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1462838400, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "vsm.1_0", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "b434d4229412fbf-8d7ef47f00000002_1792732270_data.0.parq", 02: length (i64) = 3227, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1482419824277, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 3227, 03: replica_host_idxs (list) = list[3] { [0] = 2, [1] = 
3, [2] = 0, }, 04: disk_ids (list) = list[3] { [0] = 1, [1] = 1, [2] = 0, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=1462838400/pa__collector_id=vsm.1_0/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 4462, 15: hms_parameters (map) = map[6] { "COLUMN_STATS_ACCURATE" -> "false", "numFiles" -> "1", "numRows" -> "-1", "rawDataSize" -> "-1", "totalSize" -> "3227", "transient_lastDdlTime" -> "1484725727", }, }, 4463 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1463875200, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { I0123 01:10:29.178357 11522 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ResetMetadata from 10.153.201.23:33385 took 915.000ms I0123 01:10:29.888931 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:10:29.889125 11232 rpc-trace.cc:194] RPC call: 
statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:10:30.166584 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:10:30.260124 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 94.000ms
I0123 01:10:30.308243 11226 catalog-server.cc:316] Publishing update: TABLE:history_staging.bundle@43419
I0123 01:10:30.323197 11226 catalog-server.cc:316] Publishing update: CATALOG:9ffe97f5f2c34e09:a39e74f98d4721d8@43419
I0123 01:10:30.889849 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:10:30.889999 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:10:31.890995 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:10:31.891168 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:10:32.325672 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:10:32.502566 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 177.000ms
I0123 01:10:32.891971 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:10:32.892081 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:10:33.893246 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:10:33.893354 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:10:34.505331 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:10:34.508103 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 3.000ms
I0123 01:10:34.893911 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:10:34.894024 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:10:35.895045 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:10:35.895186 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:10:36.508692 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:10:36.508837 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 1.000ms
I0123 01:10:36.895490 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:10:36.895617 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:10:37.895889 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:10:37.895994 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:10:38.509248 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:10:38.509393 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns
I0123 01:10:38.896957 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:10:38.897058 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:10:39.898049 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:10:39.898161 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:10:40.509999 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:10:40.510138 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns
I0123 01:10:40.898866 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:10:40.899051 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:10:41.900151 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:10:41.900349 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:10:42.510608 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:10:42.510846 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 1.000ms
I0123 01:10:42.900642 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:10:42.900781 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms
I0123 01:10:43.316685 7181 rpc-trace.cc:184] RPC call: CatalogService.UpdateCatalog(from 10.153.201.19:36786)
I0123 01:10:43.317127 7181 catalog-server.cc:90] UpdateCatalog(): request=TUpdateCatalogRequest { 01: protocol_version (i32) = 0, 02: header (struct) = TCatalogServiceRequestHeader { 01: requesting_user (string) = "phanalytics@PHONEHOME.VMWARE.COM", }, 03: target_table (string) = "bundle", 04: db_name (string) = "history", 05: created_partitions (set) = set[3] { "pa__arrival_day=1484956800/pa__collector_id=ph_downloader.1_0/pa__schema_version=1/", "pa__arrival_day=1485043200/pa__collector_id=ph_downloader.1_0/pa__schema_version=1/", "pa__arrival_day=1485129600/pa__collector_id=ph_downloader.1_0/pa__schema_version=1/", }, }
W0123 01:10:43.435397 7181 MetaStoreUtils.java:338] Updating partition stats fast for: bundle
W0123 01:10:43.446565 7181 MetaStoreUtils.java:341] Updated size to 8075
W0123 01:10:43.447160 7181 MetaStoreUtils.java:338] Updating partition stats fast for: bundle
W0123 01:10:43.449048 7181 MetaStoreUtils.java:341] Updated size to 5295
I0123 01:10:43.505249 7181 CatalogOpExecutor.java:2591] Updating lastDdlTime for table: bundle
I0123 01:10:43.581104 7181 HdfsTable.java:1038] incremental update for table: history.bundle
I0123 01:10:43.581212 7181 HdfsTable.java:1103] sync table partitions: bundle
I0123 01:10:43.900938 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:10:43.901031 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:10:44.083338 7181 MetaStoreUtil.java:129] Fetching 2 partitions for: history.bundle using partition batch size: 1000
I0123 01:10:44.119642 7181 HdfsTable.java:348] load block md for bundle file 4424df7470ab168-c2f7ef9f00000004_2139384333_data.0.parq
I0123 01:10:44.126302 7181 FsPermissionChecker.java:290] No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs) Java exception follows: org.apache.hadoop.hdfs.protocol.AclException: The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false.
	at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617)
	at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
	at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
	at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3344)
	at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1983)
	at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1980)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getAclStatus(DistributedFileSystem.java:1980)
	at com.cloudera.impala.util.FsPermissionChecker.getPermissions(FsPermissionChecker.java:288)
	at com.cloudera.impala.catalog.HdfsTable.getAvailableAccessLevel(HdfsTable.java:760)
	at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:844)
	at com.cloudera.impala.catalog.HdfsTable.loadPartitionsFromMetastore(HdfsTable.java:1387)
	at com.cloudera.impala.catalog.HdfsTable.updatePartitionsFromHms(HdfsTable.java:1155)
	at com.cloudera.impala.catalog.HdfsTable.load(HdfsTable.java:1044)
	at com.cloudera.impala.service.CatalogOpExecutor.loadTableMetadata(CatalogOpExecutor.java:477)
	at com.cloudera.impala.service.CatalogOpExecutor.updateCatalog(CatalogOpExecutor.java:2932)
	at com.cloudera.impala.service.JniCatalog.updateCatalog(JniCatalog.java:253)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AclException): The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false.
	at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617)
	at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
	at org.apache.hadoop.ipc.Client.call(Client.java:1471)
	at org.apache.hadoop.ipc.Client.call(Client.java:1408)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
	at com.sun.proxy.$Proxy10.getAclStatus(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAclStatus(ClientNamenodeProtocolTranslatorPB.java:1327)
	at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
	at com.sun.proxy.$Proxy11.getAclStatus(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3342)
	... 13 more
I0123 01:10:44.129972 7181 HdfsTable.java:348] load block md for bundle file 4424df7470ab168-c2f7ef9f00000000_391025725_data.0.parq
I0123 01:10:44.135036 7181 FsPermissionChecker.java:290] No ACLs retrieved, skipping ACLs check (HDFS will enforce ACLs) Java exception follows: org.apache.hadoop.hdfs.protocol.AclException: The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false.
	at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617)
	at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
	at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
	at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3344)
	at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1983)
	at org.apache.hadoop.hdfs.DistributedFileSystem$41.doCall(DistributedFileSystem.java:1980)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getAclStatus(DistributedFileSystem.java:1980)
	at com.cloudera.impala.util.FsPermissionChecker.getPermissions(FsPermissionChecker.java:288)
	at com.cloudera.impala.catalog.HdfsTable.getAvailableAccessLevel(HdfsTable.java:760)
	at com.cloudera.impala.catalog.HdfsTable.createPartition(HdfsTable.java:844)
	at com.cloudera.impala.catalog.HdfsTable.loadPartitionsFromMetastore(HdfsTable.java:1387)
	at com.cloudera.impala.catalog.HdfsTable.updatePartitionsFromHms(HdfsTable.java:1155)
	at com.cloudera.impala.catalog.HdfsTable.load(HdfsTable.java:1044)
	at com.cloudera.impala.service.CatalogOpExecutor.loadTableMetadata(CatalogOpExecutor.java:477)
	at com.cloudera.impala.service.CatalogOpExecutor.updateCatalog(CatalogOpExecutor.java:2932)
	at com.cloudera.impala.service.JniCatalog.updateCatalog(JniCatalog.java:253)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AclException): The ACL operation has been rejected. Support for ACLs has been disabled by setting dfs.namenode.acls.enabled to false.
	at org.apache.hadoop.hdfs.server.namenode.NNConf.checkAclsConfigFlag(NNConf.java:85)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAclStatus(FSNamesystem.java:9094)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAclStatus(NameNodeRpcServer.java:1617)
	at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getAclStatus(AuthorizationProviderProxyClientProtocol.java:907)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAclStatus(ClientNamenodeProtocolServerSideTranslatorPB.java:1325)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
	at org.apache.hadoop.ipc.Client.call(Client.java:1471)
	at org.apache.hadoop.ipc.Client.call(Client.java:1408)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
	at com.sun.proxy.$Proxy10.getAclStatus(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAclStatus(ClientNamenodeProtocolTranslatorPB.java:1327)
	at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
	at com.sun.proxy.$Proxy11.getAclStatus(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.getAclStatus(DFSClient.java:3342)
	... 13 more
I0123 01:10:44.136044 7181 HdfsTable.java:441] Loading disk ids for: history.bundle. nodes: 14. filesystem: hdfs://ph-hdp-prd-nn01:8020
I0123 01:10:44.181972 7181 HdfsTable.java:1416] loading file metadata for 3 partitions
I0123 01:10:44.188529 7181 HdfsTable.java:348] load block md for bundle file 4424df7470ab168-c2f7ef9f00000005_525030401_data.0.parq
I0123 01:10:44.192395 7181 HdfsTable.java:441] Loading disk ids for: history.bundle. nodes: 14. filesystem: hdfs://ph-hdp-prd-nn01:8020
I0123 01:10:44.511504 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:10:44.511662 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns
I0123 01:10:44.901571 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:10:44.901731 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:10:45.901971 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:10:45.902149 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:10:46.447957 11226 catalog-server.cc:316] Publishing update: TABLE:history.bundle@43420
I0123 01:10:46.512554 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:10:46.512754 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 1.000ms
I0123 01:10:46.775131 11226 catalog-server.cc:316] Publishing update: CATALOG:9ffe97f5f2c34e09:a39e74f98d4721d8@43420
I0123
01:10:46.903298 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:10:46.903499 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:10:47.904542 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416) I0123 01:10:47.904685 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns I0123 01:10:48.513562 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415) I0123 01:10:48.552069 7181 catalog-server.cc:96] UpdateCatalog(): response=TUpdateCatalogResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 43420, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 04: updated_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 43420, 05: table (struct) = TTable { 01: db_name (string) = "history", 02: tbl_name (string) = "bundle", 04: id (i32) = 4153, 05: access_level (i32) = 1, 06: columns (list) = list[18] { [0] = TColumn { 01: columnName (string) = "id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 36.555374145507812, 02: max_size (i64) = 117, 03: num_distinct_values (i64) = 23103018, 04: num_nulls (i64) = -1, }, 05: position (i32) = 3, }, [1] = TColumn { 01: columnName (string) = "internal_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 
04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 10202831, 04: num_nulls (i64) = -1, }, 05: position (i32) = 4, }, [2] = TColumn { 01: columnName (string) = "size_in_bytes", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 322605, 04: num_nulls (i64) = -1, }, 05: position (i32) = 5, }, [3] = TColumn { 01: columnName (string) = "ext", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 6, }, [4] = TColumn { 01: columnName (string) = "pa__detected_proxy_sources", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 0.25181648135185242, 02: max_size (i64) = 21, 03: num_distinct_values (i64) = 8, 04: num_nulls (i64) = -1, }, 05: position (i32) = 7, }, [5] = TColumn { 01: columnName (string) = "pa__proxy_source", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 7.9655427932739258, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 4, 04: num_nulls (i64) = -1, }, 05: position (i32) = 8, }, [6] = TColumn { 01: columnName (string) = "pa__os_language", 02: columnType 
(struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 2, 02: max_size (i64) = 2, 03: num_distinct_values (i64) = 14, 04: num_nulls (i64) = -1, }, 05: position (i32) = 9, }, [7] = TColumn { 01: columnName (string) = "collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 33.735977172851562, 02: max_size (i64) = 88, 03: num_distinct_values (i64) = 925201, 04: num_nulls (i64) = -1, }, 05: position (i32) = 10, }, [8] = TColumn { 01: columnName (string) = "collection__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 32, 02: max_size (i64) = 32, 03: num_distinct_values (i64) = 555701, 04: num_nulls (i64) = -1, }, 05: position (i32) = 11, }, [9] = TColumn { 01: columnName (string) = "pa__is_external", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 2, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 1, 02: max_size (i64) = 1, 03: num_distinct_values (i64) = 2, 04: num_nulls (i64) = -1, }, 05: position (i32) = 12, }, [10] = TColumn { 01: columnName (string) = "pa__collector_instance_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 33.148296356201172, 02: 
max_size (i64) = 91, 03: num_distinct_values (i64) = 1050297, 04: num_nulls (i64) = -1, }, 05: position (i32) = 13, }, [11] = TColumn { 01: columnName (string) = "pa__bundle__fk", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 36.555374145507812, 02: max_size (i64) = 117, 03: num_distinct_values (i64) = 23103018, 04: num_nulls (i64) = -1, }, 05: position (i32) = 14, }, [12] = TColumn { 01: columnName (string) = "pa__arrival_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = 17700588, 04: num_nulls (i64) = -1, }, 05: position (i32) = 15, }, [13] = TColumn { 01: columnName (string) = "pa__processed_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = 19070410, 04: num_nulls (i64) = -1, }, 05: position (i32) = 16, }, [14] = TColumn { 01: columnName (string) = "pa__kafka_partition_offset", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 8213797, 04: num_nulls (i64) = -1, }, 05: position (i32) = 17, }, [15] = TColumn { 01: columnName (string) = "pa__kafka_partition", 02: columnType (struct) = TColumnType { 01: types 
(list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = 1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 18, }, [16] = TColumn { 01: columnName (string) = "envelope_ts", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 11, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 16, 02: max_size (i64) = 16, 03: num_distinct_values (i64) = 5258683, 04: num_nulls (i64) = -1, }, 05: position (i32) = 19, }, [17] = TColumn { 01: columnName (string) = "pa__client_ip_path", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 10.111308097839355, 02: max_size (i64) = 42, 03: num_distinct_values (i64) = 111645, 04: num_nulls (i64) = -1, }, 05: position (i32) = 20, }, }, 07: clustering_columns (list) = list[3] { [0] = TColumn { 01: columnName (string) = "pa__arrival_day", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 0, }, [1] = TColumn { 01: columnName (string) = "pa__collector_id", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = -1, 02: max_size (i64) = -1, 
03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 1, }, [2] = TColumn { 01: columnName (string) = "pa__schema_version", 02: columnType (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 04: col_stats (struct) = TColumnStats { 01: avg_size (double) = 8, 02: max_size (i64) = 8, 03: num_distinct_values (i64) = -1, 04: num_nulls (i64) = -1, }, 05: position (i32) = 2, }, }, 08: table_stats (struct) = TTableStats { 01: num_rows (i64) = 23289772, }, 09: table_type (i32) = 0, 10: hdfs_table (struct) = THdfsTable { 01: hdfsBaseDir (string) = "hdfs://ph-hdp-prd-nn01:8020/user/hive/warehouse/history.db/bundle", 02: colNames (list) = list[21] { [0] = "pa__arrival_day", [1] = "pa__collector_id", [2] = "pa__schema_version", [3] = "id", [4] = "internal_id", [5] = "size_in_bytes", [6] = "ext", [7] = "pa__detected_proxy_sources", [8] = "pa__proxy_source", [9] = "pa__os_language", [10] = "collector_instance_id", [11] = "collection__fk", [12] = "pa__is_external", [13] = "pa__collector_instance_id", [14] = "pa__bundle__fk", [15] = "pa__arrival_ts", [16] = "pa__processed_ts", [17] = "pa__kafka_partition_offset", [18] = "pa__kafka_partition", [19] = "envelope_ts", [20] = "pa__client_ip_path", }, 03: nullPartitionKeyValue (string) = "__HIVE_DEFAULT_PARTITION__", 04: partitions (map) = map[18140] { -1 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[0] { }, 08: blockSize (i32) = 0, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = -1, }, 13: is_marked_cached (bool) = false, 14: id (i64) = -1, 15: hms_parameters (map) = map[0] { }, }, 17022 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: 
collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 0, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "com.vmware.ph.vc55u2.nonintrusive", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc (list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "c44807a1221194b9-984fed3e00000004_999727056_data.0.parq", 02: length (i64) = 9779, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1484746761391, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 9779, 03: replica_host_idxs (list) = list[3] { [0] = 0, [1] = 1, [2] = 2, }, 04: disk_ids (list) = list[3] { [0] = 1, [1] = 1, [2] = 1, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = 
"pa__arrival_day=0/pa__collector_id=com.vmware.ph.vc55u2.nonintrusive/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = 50, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 17022, 15: hms_parameters (map) = map[8] { "COLUMN_STATS_ACCURATE" -> "true", "impala_intermediate_stats_chunk0" -> "HBZkABsSjAtlbnZl[...](1784)", "impala_intermediate_stats_num_chunks" -> "1", "numFiles" -> "1", "numRows" -> "50", "rawDataSize" -> "-1", "totalSize" -> "9779", "transient_lastDdlTime" -> "1484746956", }, }, 17023 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1409529600, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 11, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 12, }, }, }, }, 03: num_children (i32) = 0, 15: string_literal (struct) = TStringLiteral { 01: value (string) = "com.vmware.ph.vc55u2.nonintrusive", }, }, }, }, [2] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1, }, }, }, }, }, 08: blockSize (i32) = 0, 09: file_desc 
(list) = list[1] { [0] = THdfsFileDesc { 01: file_name (string) = "c44807a1221194b9-984fed3e00000009_150531334_data.0.parq", 02: length (i64) = 3417, 03: compression (i32) = 0, 04: last_modification_time (i64) = 1484746762924, 05: file_blocks (list) = list[1] { [0] = THdfsFileBlock { 01: offset (i64) = 0, 02: length (i64) = 3417, 03: replica_host_idxs (list) = list[3] { [0] = 3, [1] = 4, [2] = 5, }, 04: disk_ids (list) = list[3] { [0] = 0, [1] = 1, [2] = 2, }, 05: is_replica_cached (list) = list[3] { [0] = false, [1] = false, [2] = false, }, }, }, }, }, 10: location (struct) = THdfsPartitionLocation { 01: prefix_index (i32) = 0, 02: suffix (string) = "pa__arrival_day=1409529600/pa__collector_id=com.vmware.ph.vc55u2.nonintrusive/pa__schema_version=1", }, 11: access_level (i32) = 1, 12: stats (struct) = TTableStats { 01: num_rows (i64) = 3, }, 13: is_marked_cached (bool) = false, 14: id (i64) = 17023, 15: hms_parameters (map) = map[8] { "COLUMN_STATS_ACCURATE" -> "true", "impala_intermediate_stats_chunk0" -> "HBYGABsSjAtlbnZl[...](1172)", "impala_intermediate_stats_num_chunks" -> "1", "numFiles" -> "1", "numRows" -> "3", "rawDataSize" -> "-1", "totalSize" -> "3417", "transient_lastDdlTime" -> "1484746954", }, }, 17024 -> THdfsPartition { 01: lineDelim (byte) = 0x0a, 02: fieldDelim (byte) = 0x01, 03: collectionDelim (byte) = 0x01, 04: mapKeyDelim (byte) = 0x01, 05: escapeChar (byte) = 0x00, 06: fileFormat (i32) = 4, 07: partitionKeyExprs (list) = list[3] { [0] = TExpr { 01: nodes (list) = list[1] { [0] = TExprNode { 01: node_type (i32) = 4, 02: type (struct) = TColumnType { 01: types (list) = list[1] { [0] = TTypeNode { 01: type (i32) = 0, 02: scalar_type (struct) = TScalarType { 01: type (i32) = 6, }, }, }, }, 03: num_children (i32) = 0, 10: int_literal (struct) = TIntLiteral { 01: value (i64) = 1409702400, }, }, }, }, [1] = TExpr { 01: nodes (list) = list[1] { [0] = T
I0123 01:10:48.751323 11231 rpc-trace.cc:194] RPC call:
statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 238.000ms
I0123 01:10:48.905015 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:10:48.905135 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:10:49.244622 7181 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.UpdateCatalog from 10.153.201.19:36786 took 5s928ms
I0123 01:10:49.906014 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:10:49.906244 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:10:50.426522 16373 rpc-trace.cc:184] RPC call: CatalogService.ResetMetadata(from 10.153.201.16:60379)
I0123 01:10:50.426750 16373 catalog-server.cc:77] ResetMetadata(): request=TResetMetadataRequest { 01: protocol_version (i32) = 0, 02: is_refresh (bool) = false, 03: table_name (struct) = TTableName { 01: db_name (string) = "history", 02: table_name (string) = "sa_issue_rating", }, }
I0123 01:10:50.427325 16373 CatalogServiceCatalog.java:946] Invalidating table metadata: history.sa_issue_rating
I0123 01:10:50.450816 16373 catalog-server.cc:83] ResetMetadata(): response=TResetMetadataResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 43421, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 04: updated_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 43421, 05: table (struct) = TTable { 01: db_name (string) = "history", 02: tbl_name (string) = "sa_issue_rating", 04: id (i32) = 18767, }, }, }, }
I0123 01:10:50.451565 16373 rpc-trace.cc:194] RPC call:
catalog-server:CatalogService.ResetMetadata from 10.153.201.16:60379 took 25.000ms
I0123 01:10:50.906651 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:10:50.906783 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms
I0123 01:10:51.129035 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:10:51.906687 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:10:51.906836 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 1.000ms
I0123 01:10:52.185312 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 1s056ms
I0123 01:10:52.509198 14317 rpc-trace.cc:184] RPC call: CatalogService.ExecDdl(from 10.153.201.19:35818)
I0123 01:10:52.509328 14317 catalog-server.cc:65] ExecDdl(): request=TDdlExecRequest { 01: protocol_version (i32) = 0, 02: ddl_type (i32) = 10, 11: drop_table_or_view_params (struct) = TDropTableOrViewParams { 01: table_name (struct) = TTableName { 01: db_name (string) = "staging", 02: table_name (string) = "ph_dow_20170123_010826_bundle", }, 02: if_exists (bool) = false, 03: purge (bool) = false, 04: is_table (bool) = true, }, 17: header (struct) = TCatalogServiceRequestHeader { 01: requesting_user (string) = "phanalytics@PHONEHOME.VMWARE.COM", }, }
I0123 01:10:52.509714 14317 CatalogOpExecutor.java:1156] Dropping table/view staging.ph_dow_20170123_010826_bundle
I0123 01:10:52.908007 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:10:52.908124 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:10:53.187836 14317 catalog-server.cc:71] ExecDdl(): response=TDdlExecResponse { 01:
result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 43422, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, 05: removed_catalog_object_DEPRECATED (struct) = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 43422, 05: table (struct) = TTable { 01: db_name (string) = "staging", 02: tbl_name (string) = "ph_dow_20170123_010826_bundle", }, }, }, }
I0123 01:10:53.187958 14317 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ExecDdl from 10.153.201.19:35818 took 679.000ms
I0123 01:10:53.440361 7181 rpc-trace.cc:184] RPC call: CatalogService.ExecDdl(from 10.153.201.19:36786)
I0123 01:10:53.441134 7181 catalog-server.cc:65] ExecDdl(): request=TDdlExecRequest { 01: protocol_version (i32) = 0, 02: ddl_type (i32) = 10, 11: drop_table_or_view_params (struct) = TDropTableOrViewParams { 01: table_name (struct) = TTableName { 01: db_name (string) = "staging", 02: table_name (string) = "ph_dow_20170123_010826_bundle", }, 02: if_exists (bool) = true, 03: purge (bool) = false, 04: is_table (bool) = true, }, 17: header (struct) = TCatalogServiceRequestHeader { 01: requesting_user (string) = "phanalytics@PHONEHOME.VMWARE.COM", }, }
I0123 01:10:53.441828 7181 CatalogOpExecutor.java:1156] Dropping table/view staging.ph_dow_20170123_010826_bundle
I0123 01:10:53.442328 7181 catalog-server.cc:71] ExecDdl(): response=TDdlExecResponse { 01: result (struct) = TCatalogUpdateResult { 01: catalog_service_id (struct) = TUniqueId { 01: hi (i64) = -6917924894998835703, 02: lo (i64) = -6656754584041086504, }, 02: version (i64) = 0, 03: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, }, }
I0123 01:10:53.442504 7181 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.ExecDdl from 10.153.201.19:36786 took 2.000ms
I0123 01:10:53.833331 11226
catalog-server.cc:316] Publishing update: TABLE:history.sa_issue_rating@43421
I0123 01:10:53.847482 11226 catalog-server.cc:316] Publishing update: CATALOG:9ffe97f5f2c34e09:a39e74f98d4721d8@43421
I0123 01:10:53.908968 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:10:53.909132 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:10:54.185534 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:10:54.185745 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 1.000ms
I0123 01:10:54.298207 11226 catalog-server.cc:316] Publishing update: CATALOG:9ffe97f5f2c34e09:a39e74f98d4721d8@43422
I0123 01:10:54.298452 11226 catalog-server.cc:335] Publishing deletion: TABLE:staging.ph_dow_20170123_010826_bundle
I0123 01:10:54.524013 18966 rpc-trace.cc:184] RPC call: CatalogService.ResetMetadata(from 10.153.201.15:48708)
I0123 01:10:54.524334 18966 catalog-server.cc:77] ResetMetadata(): request=TResetMetadataRequest { 01: protocol_version (i32) = 0, 02: is_refresh (bool) = true, 03: table_name (struct) = TTableName { 01: db_name (string) = "history", 02: table_name (string) = "bundle", }, }
I0123 01:10:54.525099 18966 CatalogServiceCatalog.java:836] Refreshing table metadata: history.bundle
I0123 01:10:54.547880 18966 Table.java:161] Loading column stats for table: bundle
I0123 01:10:54.582108 18966 Column.java:69] col stats: collection__fk #distinct=555701
I0123 01:10:54.582207 18966 Column.java:69] col stats: envelope_ts #distinct=5258683
I0123 01:10:54.582258 18966 Column.java:69] col stats: pa__arrival_ts #distinct=17700588
I0123 01:10:54.582310 18966 Column.java:69] col stats: pa__kafka_partition_offset #distinct=8213797
I0123 01:10:54.582449 18966 Column.java:69] col stats: pa__os_language #distinct=14
I0123 01:10:54.582548 18966
Column.java:69] col stats: pa__processed_ts #distinct=19070410
I0123 01:10:54.582622 18966 Column.java:69] col stats: pa__detected_proxy_sources #distinct=8
I0123 01:10:54.582679 18966 Column.java:69] col stats: pa__client_ip_path #distinct=111645
I0123 01:10:54.582799 18966 Column.java:69] col stats: pa__bundle__fk #distinct=23103018
I0123 01:10:54.582891 18966 Column.java:69] col stats: collector_instance_id #distinct=925201
I0123 01:10:54.582998 18966 Column.java:69] col stats: pa__kafka_partition #distinct=1
I0123 01:10:54.583062 18966 Column.java:69] col stats: pa__is_external #distinct=2
I0123 01:10:54.583113 18966 Column.java:69] col stats: pa__proxy_source #distinct=4
I0123 01:10:54.583163 18966 Column.java:69] col stats: size_in_bytes #distinct=322605
I0123 01:10:54.583212 18966 Column.java:69] col stats: id #distinct=23103018
I0123 01:10:54.583263 18966 Column.java:69] col stats: pa__collector_instance_id #distinct=1050297
I0123 01:10:54.583348 18966 Column.java:69] col stats: ext #distinct=2
I0123 01:10:54.583410 18966 Column.java:69] col stats: internal_id #distinct=10202831
I0123 01:10:54.583467 18966 HdfsTable.java:1038] incremental update for table: history.bundle
I0123 01:10:54.583545 18966 HdfsTable.java:1103] sync table partitions: bundle
I0123 01:10:54.909854 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:10:54.909947 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:10:55.065227 18966 HdfsTable.java:1416] loading file metadata for 18139 partitions
I0123 01:10:55.911281 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:10:55.911396 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:10:56.185446 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from
10.153.201.11:51415)
I0123 01:10:56.185643 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns
I0123 01:10:56.911829 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:10:56.912025 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:10:57.186741 18634 rpc-trace.cc:184] RPC call: CatalogService.ResetMetadata(from 10.153.201.26:43550)
I0123 01:10:57.187036 18634 catalog-server.cc:77] ResetMetadata(): request=TResetMetadataRequest { 01: protocol_version (i32) = 0, 02: is_refresh (bool) = true, 03: table_name (struct) = TTableName { 01: db_name (string) = "history", 02: table_name (string) = "bundle", }, 05: partition_spec (list) = list[3] { [0] = TPartitionKeyValue { 01: name (string) = "pa__schema_version", 02: value (string) = "1", }, [1] = TPartitionKeyValue { 01: name (string) = "pa__collector_id", 02: value (string) = "airwatch-admin-ui.1_0", }, [2] = TPartitionKeyValue { 01: name (string) = "pa__arrival_day", 02: value (string) = "1485129600", }, }, }
I0123 01:10:57.911908 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:10:57.912088 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:10:58.186835 11231 rpc-trace.cc:184] RPC call: StatestoreSubscriber.UpdateState(from 10.153.201.11:51415)
I0123 01:10:58.186976 11231 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.UpdateState from 10.153.201.11:51415 took 0.000ns
I0123 01:10:58.912431 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:10:58.912603 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns
I0123 01:10:59.020297 6590
rpc-trace.cc:184] RPC call: CatalogService.PrioritizeLoad(from 10.153.201.16:55088)
I0123 01:10:59.020467 6590 catalog-server.cc:127] PrioritizeLoad(): request=TPrioritizeLoadRequest { 01: protocol_version (i32) = 0, 02: header (struct) = TCatalogServiceRequestHeader { }, 03: object_descs (list) = list[1] { [0] = TCatalogObject { 01: type (i32) = 3, 02: catalog_version (i64) = 0, 05: table (struct) = TTable { 01: db_name (string) = "history", 02: tbl_name (string) = "sa_issue_rating", }, }, }, }
I0123 01:10:59.020642 6590 catalog-server.cc:133] PrioritizeLoad(): response=TPrioritizeLoadResponse { 01: status (struct) = TStatus { 01: status_code (i32) = 0, 02: error_msgs (list) = list[0] { }, }, }
I0123 01:10:59.020892 6590 rpc-trace.cc:194] RPC call: catalog-server:CatalogService.PrioritizeLoad from 10.153.201.16:55088 took 1.000ms
I0123 01:10:59.021311 11202 TableLoadingMgr.java:281] Loading next table. Remaining items in queue: 0
I0123 01:10:59.912748 11232 rpc-trace.cc:184] RPC call: StatestoreSubscriber.Heartbeat(from 10.153.201.11:51416)
I0123 01:10:59.913107 11232 rpc-trace.cc:194] RPC call: statestore-subscriber:StatestoreSubscriber.Heartbeat from 10.153.201.11:51416 took 0.000ns