Details
- Type: Bug
- Status: Open
- Priority: Minor
- Resolution: Unresolved
- Affects Version/s: Impala 2.3.0
- Fix Version/s: None
- Component/s: None
Description
We observe unresponsiveness in the catalog server when loading metadata for a multi-table update. In this environment several tables have large metadata sizes. No single table is large enough to breach the 2 GB array size limit (as described in IMPALA-2648), but a multi-table update may be. This is aggravated by a large statestore_update_frequency setting (~60s), which means fewer, larger updates.
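For illustration, a minimal sketch of the failure mode (illustrative Java, not Impala's actual code; the class name, per-table chunk size, and loop bound are hypothetical): the whole update is accumulated in one in-memory buffer, so no individual table needs to be near the limit for the total to breach it.

import java.io.ByteArrayOutputStream;

// Illustrative repro: many moderate writes into one ByteArrayOutputStream,
// standing in for serializing every table of a multi-table update into a
// single buffer (the role TSerializer plays in the stack trace below).
// Run with a large heap (e.g. -Xmx6g) so the array size limit is reached
// before ordinary heap exhaustion.
public class MultiTableUpdateSketch {
    public static void main(String[] args) {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        byte[] oneTable = new byte[256 * 1024 * 1024]; // hypothetical 256 MB of table metadata
        for (int i = 0; i < 16; i++) {
            // Each write fits comfortably; the aggregate does not. Once the
            // backing array must grow past the VM limit, grow()/Arrays.copyOf()
            // throws: java.lang.OutOfMemoryError: Requested array size exceeds VM limit
            buffer.write(oneTable, 0, oneTable.length);
            System.out.println("buffered MB: " + buffer.size() / (1024 * 1024));
        }
    }
}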
Note that the catalog server does not crash; it hangs, reporting the errors below:
I0620 09:04:48.926173 13941 jni-util.cc:166] java.lang.OutOfMemoryError: Requested array size exceeds VM limit
    at java.util.Arrays.copyOf(Arrays.java:2271)
    at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
    at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
    at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
    at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:145)
    at org.apache.thrift.protocol.TBinaryProtocol.writeI64(TBinaryProtocol.java:176)
    at com.cloudera.impala.thrift.THdfsFileBlock$THdfsFileBlockStandardScheme.write(THdfsFileBlock.java:809)
    at com.cloudera.impala.thrift.THdfsFileBlock$THdfsFileBlockStandardScheme.write(THdfsFileBlock.java:705)
    at com.cloudera.impala.thrift.THdfsFileBlock.write(THdfsFileBlock.java:624)
    at com.cloudera.impala.thrift.THdfsFileDesc$THdfsFileDescStandardScheme.write(THdfsFileDesc.java:792)
    at com.cloudera.impala.thrift.THdfsFileDesc$THdfsFileDescStandardScheme.write(THdfsFileDesc.java:686)
    at com.cloudera.impala.thrift.THdfsFileDesc.write(THdfsFileDesc.java:603)
    at com.cloudera.impala.thrift.THdfsPartition$THdfsPartitionStandardScheme.write(THdfsPartition.java:1785)
    at com.cloudera.impala.thrift.THdfsPartition$THdfsPartitionStandardScheme.write(THdfsPartition.java:1543)
    at com.cloudera.impala.thrift.THdfsPartition.write(THdfsPartition.java:1389)
    at com.cloudera.impala.thrift.THdfsTable$THdfsTableStandardScheme.write(THdfsTable.java:1123)
    at com.cloudera.impala.thrift.THdfsTable$THdfsTableStandardScheme.write(THdfsTable.java:969)
    at com.cloudera.impala.thrift.THdfsTable.write(THdfsTable.java:848)
    at com.cloudera.impala.thrift.TTable$TTableStandardScheme.write(TTable.java:1628)
    at com.cloudera.impala.thrift.TTable$TTableStandardScheme.write(TTable.java:1395)
    at com.cloudera.impala.thrift.TTable.write(TTable.java:1209)
    at com.cloudera.impala.thrift.TCatalogObject$TCatalogObjectStandardScheme.write(TCatalogObject.java:1241)
    at com.cloudera.impala.thrift.TCatalogObject$TCatalogObjectStandardScheme.write(TCatalogObject.java:1098)
    at com.cloudera.impala.thrift.TCatalogObject.write(TCatalogObject.java:938)
    at com.cloudera.impala.thrift.TGetAllCatalogObjectsResponse$TGetAllCatalogObjectsResponseStandardScheme.write(TGetAllCatalogObjectsResponse.java:487)
    at com.cloudera.impala.thrift.TGetAllCatalogObjectsResponse$TGetAllCatalogObjectsResponseStandardScheme.write(TGetAllCatalogObjectsResponse.java:421)
    at com.cloudera.impala.thrift.TGetAllCatalogObjectsResponse.write(TGetAllCatalogObjectsResponse.java:365)
    at org.apache.thrift.TSerializer.serialize(TSerializer.java:79)
    at com.cloudera.impala.service.JniCatalog.getCatalogObjects(JniCatalog.java:110)
I0620 09:04:48.929128 13941 status.cc:112] OutOfMemoryError: Requested array size exceeds VM limit
    @ 0x7ae8f3 (unknown)
    @ 0xaa3625 (unknown)
    @ 0x79f998 (unknown)
    @ 0x785a3c (unknown)
    @ 0xade7aa (unknown)
    @ 0xae0a50 (unknown)
    @ 0xd283b3 (unknown)
    @ 0x7f3321eb8aa1 (unknown)
    @ 0x7f3320e15aad (unknown)
Note that the effective array size limit is 1 GB in this environment, as per IMPALA-3961.
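A sketch of why the effective limit would be 1 GB rather than 2 GB, assuming capacity-doubling buffer growth (an assumption here; see IMPALA-3961 for the actual analysis): the largest capacity reachable by doubling under the ~2 GB array limit is 1 GiB, so any buffer needing more triggers a ~2 GiB allocation request.

// Sketch (assuming doubling growth, the default step in ByteArrayOutputStream.grow):
// buffers past 1 GiB request a ~2 GiB backing array, which already exceeds
// the VM limit, halving the usable capacity.
public class DoublingLimitSketch {
    public static void main(String[] args) {
        final long vmLimit = Integer.MAX_VALUE - 8; // practical max array length
        long capacity = 32;                         // illustrative initial size
        while (capacity * 2 <= vmLimit) {
            capacity *= 2;                          // each grow() doubles the buffer
        }
        // Prints 1,073,741,824 (1 GiB), the last capacity doubling can reach.
        System.out.printf("largest doubled capacity: %,d bytes%n", capacity);
        System.out.printf("next doubling would request: %,d bytes%n", capacity * 2);
    }
}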
Issue Links
- relates to: IMPALA-2648 catalogd crashes when serialized messages are over 2 GB (Resolved)