Description
When performing a get operation on a column family with more than 64MB of data, the operation fails with:
Caused by: Portable(java.io.IOException): Call to host:port failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message was too large. May be malicious. Use CodedInputStream.setSizeLimit() to increase the size limit.
at org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1481)
at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1453)
at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1653)
at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1711)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:27308)
at org.apache.hadoop.hbase.protobuf.ProtobufUtil.get(ProtobufUtil.java:1381)
at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:753)
at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:751)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:120)
at org.apache.hadoop.hbase.client.HTable.get(HTable.java:756)
at org.apache.hadoop.hbase.client.HTable.get(HTable.java:765)
at org.apache.hadoop.hbase.client.HTablePool$PooledHTable.get(HTablePool.java:395)
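For reference, the "Use CodedInputStream.setSizeLimit() to increase the size limit" hint in the message refers to the protobuf-java API sketched below. This is only an illustration with made-up names (SizeLimitSketch, withLargerLimit, the 256MB value); in the HBase client the CodedInputStream is created inside the RPC layer, so application code cannot call setSizeLimit() on it directly and any real fix would have to land in HBase itself.

import com.google.protobuf.CodedInputStream;
import java.io.InputStream;

public class SizeLimitSketch {
    // Raise the per-message ceiling (64MB by default in the protobuf-java
    // version used by HBase 1.0) on a stream before parsing from it.
    static CodedInputStream withLargerLimit(InputStream in) {
        CodedInputStream cis = CodedInputStream.newInstance(in);
        cis.setSizeLimit(256 * 1024 * 1024); // e.g. 256MB instead of the 64MB default
        return cis;
    }
}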
This may be related to https://issues.apache.org/jira/browse/HBASE-11747, but that issue concerns the ClusterStatus (heartbeat) payload rather than data reads.
Scan and put operations on the same data work fine; a scan-based workaround is sketched below.
Tested on a 1.0.0 cluster with both 1.0.1 and 1.0.0 clients.
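Since scans do work, a scan restricted to the affected row with a cell batch limit can serve as a client-side workaround until the RPC limit is addressed. The sketch below assumes an HBase 1.0 client API; the table, row and family names (my_table, my_row, cf) and the batch size are placeholders to be tuned to the actual cell sizes.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class WideRowScanWorkaround {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("my_table"))) {
            byte[] row = Bytes.toBytes("my_row");
            // Scan only the row the failing Get would have fetched.
            Scan scan = new Scan(row, Bytes.add(row, new byte[] {0}));
            scan.addFamily(Bytes.toBytes("cf"));
            // Cap the number of cells returned per Result so each RPC
            // response stays below the 64MB protobuf limit.
            scan.setBatch(1000);
            try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result chunk : scanner) {
                    // Each Result carries a slice of the row's cells; reassemble as needed.
                    System.out.println("cells in this chunk: " + chunk.size());
                }
            }
        }
    }
}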
Issue Links
- relates to HBASE-11747 ClusterStatus (heartbeat) is too bulky (Open)
- relates to BEAM-10464 [HBaseIO] - Protocol message was too large. May be malicious. (Resolved)