ZooKeeper / ZOOKEEPER-1162

consistent handling of jute.maxbuffer when attempting to read large zk "directories"

    Details

    • Type: Improvement
    • Status: Open
    • Priority: Critical
    • Resolution: Unresolved
    • Affects Version/s: 3.3.3
    • Fix Version/s: 3.6.0, 3.5.5
    • Component/s: server
    • Labels:
      None

      Description

      Recently we encountered a situation where a zk directory was successfully populated with 250k elements. When our system attempted to read the znode dir, it failed because the serialized contents of the dir exceeded the default 1 MB jute.maxbuffer limit. A few things seemed odd:

      1) It seems odd that we could populate the directory to a very large size but could not read the listing.
      2) The workaround was bumping up jute.maxbuffer on the client side.
      Would it make more sense to reject adding new znodes once the child listing would exceed jute.maxbuffer?
      Alternatively, would it make sense to have the zk dir listing ignore the jute.maxbuffer setting?
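      The failure mode described above can be reasoned about with a back-of-the-envelope estimate. This is a hedged sketch, not code from the issue: it assumes Jute's string encoding (a 4-byte length prefix followed by the UTF-8 bytes of each child name) and uses an illustrative average name length of 20 characters. `DirListingSize` and `estimateReplyBytes` are hypothetical names introduced here.

      ```java
      // Sketch: why a getChildren() reply for ~250k children can blow past the
      // default 1 MB jute.maxbuffer limit. Assumes Jute encodes each string as a
      // 4-byte length prefix plus its bytes (ASCII child names assumed).
      public class DirListingSize {
          static long estimateReplyBytes(int childCount, int avgNameLen) {
              // per child: 4-byte length prefix + name bytes
              return (long) childCount * (4 + avgNameLen);
          }

          public static void main(String[] args) {
              long defaultLimit = 1024 * 1024; // default jute.maxbuffer: 1 MB
              long estimate = estimateReplyBytes(250_000, 20);
              System.out.println("estimated reply: " + estimate + " bytes");
              System.out.println("exceeds 1 MB default? " + (estimate > defaultLimit));
              // Workaround from the description: raise the limit on the client
              // JVM before the client classes load, e.g.
              //   java -Djute.maxbuffer=8388608 ...
              // (8 MB here is illustrative, not a recommended value.)
          }
      }
      ```

      With these assumptions, 250k names of ~20 characters serialize to roughly 6 MB, several times the 1 MB default, which matches the write-succeeds-but-list-fails behavior: each individual create() is small, while the aggregate listing is not.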

              People

              • Assignee:
                hanm Michael Han
              • Reporter:
                jmhsieh Jonathan Hsieh
              • Votes:
                12
              • Watchers:
                24
