Details
- Type: Improvement
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 3.3.3
- Fix Version/s: None
- Component/s: None
Description
Recently we encountered a situation where a ZooKeeper directory was successfully populated with 250k elements. When our system attempted to read the znode directory, it failed because the contents of the directory exceeded the default 1 MB jute.maxbuffer limit. There were a few odd things:
1) It seems odd that we could populate the directory to a very large size but could not read the listing.
2) The workaround was bumping up jute.maxbuffer on the client side (see the sketch after the questions below).
Would it make more sense to reject adding new znodes once the child listing would exceed jute.maxbuffer?
Alternatively, would it make more sense to have the znode directory listing ignore the jute.maxbuffer setting?
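For reference, the client-side workaround mentioned in 2) looks roughly like the following. This is a minimal sketch, not a recommendation: the 4 MB value, the localhost connection string, and the /large-dir path are illustrative assumptions, not values from this report.

```java
import java.util.List;
import org.apache.zookeeper.ZooKeeper;

public class LargeChildListWorkaround {
    public static void main(String[] args) throws Exception {
        // jute.maxbuffer is read from a JVM system property, so it must be
        // raised before the client is created (equivalently, pass
        // -Djute.maxbuffer=4194304 on the command line). 4 MB is an arbitrary
        // example value large enough for the oversized child listing.
        System.setProperty("jute.maxbuffer", Integer.toString(4 * 1024 * 1024));

        // Connection string and session timeout are placeholders.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> { });
        try {
            // With the default 1 MB limit this call fails once the serialized
            // list of ~250k child names exceeds the buffer size.
            List<String> children = zk.getChildren("/large-dir", false);
            System.out.println("child count: " + children.size());
        } finally {
            zk.close();
        }
    }
}
```

Raising the limit only on the client papers over the problem for reads; the server-side behavior described in the two questions above is the actual subject of this improvement.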
Attachments
Issue Links
- blocks
  - HBASE-14938 Limit the number of znodes for ZK in bulk loaded hfile replication (Closed)
- breaks
  - ZOOKEEPER-706 large numbers of watches can cause session re-establishment to fail (Closed)
  - HBASE-4246 Cluster with too many regions cannot withstand some master failover scenarios (Closed)
- is duplicated by
  - ZOOKEEPER-4332 Cannot access children of znode that owns too many znodes (Open)
- is related to
  - ZOOKEEPER-2260 Paginated getChildren call (Patch Available)
- relates to
  - ZOOKEEPER-4314 Can not get real exception when getChildren more than 4M (Open)