ZooKeeper / ZOOKEEPER-706

large numbers of watches can cause session re-establishment to fail

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 3.1.2, 3.2.2, 3.3.0
    • Fix Version/s: 3.4.7, 3.5.2, 3.6.0
    • Component/s: c client, java client
    • Labels: None

      Description

      If a client sets a large number of watches, the "set watches" operation during session re-establishment can fail.

      For example:
      WARN [NIOServerCxn.Factory:22801:NIOServerCnxn@417] - Exception causing close of session 0xe727001201a4ee7c due to java.io.IOException: Len error 4348380

      In this case the client was a web monitoring app and had set both data and child watches on > 32k znodes.
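
      For context, the "Len error" above comes from the server's frame-length check: every request is prefixed with a 4-byte length, and any frame larger than jute.maxbuffer (roughly 1 MB by default) is rejected. A minimal sketch of that check, simplified rather than copied from NIOServerCnxn:

      import java.io.IOException;
      import java.nio.ByteBuffer;

      // Simplified sketch: requests arrive framed as [4-byte length][payload].
      // Frames larger than the jute.maxbuffer limit are rejected outright,
      // which is what produces the "Len error <N>" messages above.
      final class FrameLengthCheck {
          static int readLength(ByteBuffer lenBuffer, int maxBuffer) throws IOException {
              int len = lenBuffer.getInt();
              if (len < 0 || len > maxBuffer) {
                  throw new IOException("Len error " + len);
              }
              return len;
          }
      }

      Since a single "set watches" request carries every watched path, watching > 32k znodes can easily exceed that limit.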

      There are two issues I see here that we need to fix:

      1) Handle this case properly (split up the set watches into multiple calls, I guess...)
      2) The session should have expired after the "timeout". However, we seem to consider any message from the client as resetting the expiration on the server side. Probably we should only consider messages from the client that are sent during an established session; otherwise we can end up in this situation where the session is never established, yet never expires either. Perhaps we should create another JIRA for this particular issue.

      Attachments

      1. ZOOKEEPER-706.patch
        8 kB
        Chris Thunes
      2. ZOOKEEPER-706.patch
        8 kB
        Chris Thunes
      3. ZOOKEEPER-706.patch
        5 kB
        Chris Thunes
      4. ZOOKEEPER-706-branch-34.patch
        9 kB
        Chris Thunes
      5. ZOOKEEPER-706-branch-34.patch
        9 kB
        Chris Thunes

          Activity

          Patrick Hunt added a comment -

          This is a pretty easy one for someone to fix, and as more users use more watches it would be good to get this addressed.

          Mahadev konar added a comment -

          Not a blocker. Moving it out of the 3.4 release.

          Eric Hwang added a comment -

          I am seeing this issue coming up quite a bit right now. Some clients are being continually disconnected and reconnected with that error message, which I assume is caused by the large number of watches. For example:

          2011-08-29 13:26:43,750 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn$Factory@251] - Accepted socket connection from /10.141.241.188:32866
          2011-08-29 13:26:43,775 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@770] - Client attempting to renew session 0x1319819fcd1000b at /10.141.241.188:32866
          2011-08-29 13:26:43,775 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1580] - Established session 0x1319819fcd1000b with negotiated timeout 10000 for client /10.141.241.188:32866
          2011-08-29 13:26:43,775 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@639] - Exception causing close of session 0x1319819fcd1000b due to java.io.IOException: Len error 1150247
          2011-08-29 13:26:43,775 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1435] - Closed socket connection for client /10.141.241.188:32866 which had sessionid 0x1319819fcd1000b
          2011-08-29 13:26:47,276 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn$Factory@251] - Accepted socket connection from /10.141.241.188:32871
          2011-08-29 13:26:47,298 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@770] - Client attempting to renew session 0x1319819fcd1000b at /10.141.241.188:32871
          2011-08-29 13:26:47,298 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1580] - Established session 0x1319819fcd1000b with negotiated timeout 10000 for client /10.141.241.188:32871
          2011-08-29 13:26:47,298 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@639] - Exception causing close of session 0x1319819fcd1000b due to java.io.IOException: Len error 1150247
          2011-08-29 13:26:47,300 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1435] - Closed socket connection for client /10.141.241.188:32871 which had sessionid 0x1319819fcd1000b
          2011-08-29 13:26:51,124 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn$Factory@251] - Accepted socket connection from /10.141.241.188:32879
          2011-08-29 13:26:51,142 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@770] - Client attempting to renew session 0x1319819fcd1000b at /10.141.241.188:32879
          2011-08-29 13:26:51,143 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1580] - Established session 0x1319819fcd1000b with negotiated timeout 10000 for client /10.141.241.188:32879
          2011-08-29 13:26:51,143 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@639] - Exception causing close of session 0x1319819fcd1000b due to java.io.IOException: Len error 1150247
          2011-08-29 13:26:51,143 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1435] - Closed socket connection for client /10.141.241.188:32879 which had sessionid 0x1319819fcd1000b
          2011-08-29 13:26:53,985 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn$Factory@251] - Accepted socket connection from /10.141.241.188:32885
          2011-08-29 13:26:54,006 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@770] - Client attempting to renew session 0x1319819fcd1000b at /10.141.241.188:32885
          2011-08-29 13:26:54,007 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1580] - Established session 0x1319819fcd1000b with negotiated timeout 10000 for client /10.141.241.188:32885
          2011-08-29 13:26:54,007 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@639] - Exception causing close of session 0x1319819fcd1000b due to java.io.IOException: Len error 1150247
          2011-08-29 13:26:54,007 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1435] - Closed socket connection for client /10.141.241.188:32885 which had sessionid 0x1319819fcd1000b
          2011-08-29 13:26:57,495 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn$Factory@251] - Accepted socket connection from /10.141.241.188:32892
          2011-08-29 13:26:57,513 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@770] - Client attempting to renew session 0x1319819fcd1000b at /10.141.241.188:32892
          2011-08-29 13:26:57,513 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1580] - Established session 0x1319819fcd1000b with negotiated timeout 10000 for client /10.141.241.188:32892
          2011-08-29 13:26:57,514 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@639] - Exception causing close of session 0x1319819fcd1000b due to java.io.IOException: Len error 1150247
          2011-08-29 13:26:57,514 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1435] - Closed socket connection for client /10.141.241.188:32892 which had sessionid 0x1319819fcd1000b
          2011-08-29 13:26:59,937 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn$Factory@251] - Accepted socket connection from /10.141.241.188:32897
          2011-08-29 13:26:59,958 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@770] - Client attempting to renew session 0x1319819fcd1000b at /10.141.241.188:32897
          2011-08-29 13:26:59,958 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1580] - Established session 0x1319819fcd1000b with negotiated timeout 10000 for client /10.141.241.188:32897
          2011-08-29 13:26:59,958 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@639] - Exception causing close of session 0x1319819fcd1000b due to java.io.IOException: Len error 1150247
          2011-08-29 13:26:59,958 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1435] - Closed socket connection for client /10.141.241.188:32897 which had sessionid 0x1319819fcd1000b
          2011-08-29 13:27:03,711 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn$Factory@251] - Accepted socket connection from /10.141.241.188:32904

          It would be great to get this fixed in an upcoming release since it is impacting us quite a bit.

          Patrick Hunt added a comment -

          You might be seeing ZOOKEEPER-1162 – see if the workaround there fixes your problem.

          Eric Hwang added a comment -

          Any idea if the jute.maxbuffer setting needs to be applied to both server and client, or just the client?

          Eric Hwang added a comment -

          Thanks Patrick; setting jute.maxbuffer to a larger value on both the server and the client seems to have fixed the problem.
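
          For anyone hitting the same issue: jute.maxbuffer is a JVM system property read by both the server and the client, so it has to be raised on both sides with matching values. A minimal sketch of the client side; the 4 MB value and class name are illustrative, not a recommendation:

          public class RaiseJuteMaxBuffer {
              public static void main(String[] args) {
                  // Must be set before any ZooKeeper client classes are loaded.
                  // The server JVM needs a matching flag, e.g.
                  //   JVMFLAGS="-Djute.maxbuffer=4194304" bin/zkServer.sh start
                  System.setProperty("jute.maxbuffer", Integer.toString(4 * 1024 * 1024));
                  // ... construct the ZooKeeper client here ...
              }
          }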

          Patrick Hunt added a comment -

          Good to know, thanks Eric. Even more reason to address ZOOKEEPER-1162.

          Chun Chen added a comment -

          I think it's not only a large number of watches that can cause the error; too many ops in a single call of ZooKeeper#multi can as well. I'm seeing the following error.
          RM log:

          2015-01-20 09:45:45,464 WARN org.apache.zookeeper.ClientCnxn: Session 0x24aeb334e8e000d for server c112/10.149.27.112:2181, unexpected error, closing socket connection and attempting reconnect
          java.io.IOException: Broken pipe
                  at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
                  at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
                  at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
                  at sun.nio.ch.IOUtil.write(IOUtil.java:65)
                  at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470)
                  at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:117)
                  at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:355)
                  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
          2015-01-20 09:45:45,564 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Exception while executing a ZK operation.
          org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
                  at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
                  at org.apache.zookeeper.ZooKeeper.multiInternal(ZooKeeper.java:931)
                  at org.apache.zookeeper.ZooKeeper.multi(ZooKeeper.java:911)
                  at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$4.run(ZKRMStateStore.java:895)
                  at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$4.run(ZKRMStateStore.java:892)
                  at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithCheck(ZKRMStateStore.java:1031)
                  at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithRetries(ZKRMStateStore.java:1050)
                  at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doMultiWithRetries(ZKRMStateStore.java:892)
                  at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doMultiWithRetries(ZKRMStateStore.java:906)
                  at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.createWithRetries(ZKRMStateStore.java:915)
                  at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.storeRMDTMasterKeyState(ZKRMStateStore.java:785)
                  at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.storeRMDTMasterKey(RMStateStore.java:678)
                  at org.apache.hadoop.yarn.server.resourcemanager.security.RMDelegationTokenSecretManager.storeNewMasterKey(RMDelegationTokenSecretManager.java:86)
                  at org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.updateCurrentKey(AbstractDelegationTokenSecretManager.java:233)
                  at org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.startThreads(AbstractDelegationTokenSecretManager.java:116)
                  at org.apache.hadoop.yarn.server.resourcemanager.RMSecretManagerService.serviceStart(RMSecretManagerService.java:83)
                  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
                  at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
                  at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:514)
                  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
                  at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:847)
                  at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$2.run(ResourceManager.java:888)
                  at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$2.run(ResourceManager.java:884)
                  at java.security.AccessController.doPrivileged(Native Method)
                  at javax.security.auth.Subject.doAs(Subject.java:422)
                  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1667)
                  at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToActive(ResourceManager.java:884)
                  at org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:275)
                  at org.apache.hadoop.yarn.server.resourcemanager.EmbeddedElectorService.becomeActive(EmbeddedElectorService.java:116)
                  at org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:804)
                  at org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:415)
                  at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:596)
          
          
          2015-01-20 09:59:49,665 WARN org.apache.zookeeper.ClientCnxn: Session 0x34aeb34ba360012 for server c114/10.149.27.114:2181, unexpected error, closing socket connection and attempting reconnect
          java.io.IOException: Broken pipe
                  at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
                  at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
                  at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
                  at sun.nio.ch.IOUtil.write(IOUtil.java:65)
                  at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470)
                  at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:117)
                  at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:355)
                  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
          2015-01-20 09:59:49,765 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Exception while executing a ZK operation.
          org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
                  at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
                  at org.apache.zookeeper.ZooKeeper.multiInternal(ZooKeeper.java:931)
                  at org.apache.zookeeper.ZooKeeper.multi(ZooKeeper.java:911)
                  at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$4.run(ZKRMStateStore.java:895)
                  at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$4.run(ZKRMStateStore.java:892)
                  at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithCheck(ZKRMStateStore.java:1031)
                  at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithRetries(ZKRMStateStore.java:1050)
                  at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doMultiWithRetries(ZKRMStateStore.java:892)
                  at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.removeApplicationStateInternal(ZKRMStateStore.java:678)
                  at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$RemoveAppTransition.transition(RMStateStore.java:184)
                  at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$RemoveAppTransition.transition(RMStateStore.java:170)
                  at org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:362)
                  at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
                  at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
                  at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
                  at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.handleStoreEvent(RMStateStore.java:769)
                  at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler.handle(RMStateStore.java:842)
                  at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler.handle(RMStateStore.java:837)
                  at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:191)
                  at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:124)
                  at java.lang.Thread.run(Thread.java:745)
          

          ZK log:

          2015-01-20 09:45:46,377 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /10.149.27.114:33517
          2015-01-20 09:45:46,381 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@861] - Client attempting to renew session 0x24aeb334e8e000d at /10.149.27.114:33517
          2015-01-20 09:45:46,381 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@617] - Established session 0x24aeb334e8e000d with negotiated timeout 10000 for client /10.149.27.114:33517
          2015-01-20 09:45:46,381 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@892] - got auth packet /10.149.27.114:33517
          2015-01-20 09:45:46,381 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@926] - auth success /10.149.27.114:33517
          2015-01-20 09:45:46,481 [myid:2] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@362] - Exception causing close of session 0x24aeb334e8e000d due to java.io.IOException: Len error 9271869
          2015-01-20 09:45:46,481 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /10.149.27.114:33517 which had sessionid 0x24aeb334e8e000d
          
          Chris Thunes added a comment -

          I would like to submit the attached patch for review. This change splits the SetWatches call after session re-establishment into multiple smaller requests, to avoid hitting the server's maxBuffer limit.
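
          A minimal sketch of the batching idea (illustrative only, not the patch itself; the threshold value and names are made up for the example):

          import java.util.ArrayList;
          import java.util.List;

          // Sketch: split a long list of watched paths into batches whose combined
          // path length stays under a threshold, so that each resulting SetWatches
          // request fits comfortably under the server's jute.maxbuffer limit.
          final class WatchBatcher {
              private static final int MAX_BATCH_BYTES = 128 * 1024; // illustrative

              static List<List<String>> batch(List<String> paths) {
                  List<List<String>> batches = new ArrayList<List<String>>();
                  List<String> current = new ArrayList<String>();
                  int size = 0;
                  for (String path : paths) {
                      if (!current.isEmpty() && size + path.length() > MAX_BATCH_BYTES) {
                          batches.add(current);
                          current = new ArrayList<String>();
                          size = 0;
                      }
                      current.add(path);
                      size += path.length();
                  }
                  if (!current.isEmpty()) {
                      batches.add(current);
                  }
                  return batches;
              }
          }

          Each batch is then sent to the server as its own SetWatches packet.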

          Raul Gutierrez Segales added a comment -

          lgtm, +1.

          Thanks for the patch Chris Thunes! Do you think you could add a test? I think we could go without one, but given that the assumption has always been that there's only one SetWatches() packet after reconnect, it would be good to have a small test for this.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12739232/ZOOKEEPER-706.patch
          against trunk revision 1684956.

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2765//console

          This message is automatically generated.

          Raul Gutierrez Segales added a comment -

          Ah, also: the patch you attached only applies to the 3.4 branch. Do you mind renaming that patch to ZOOKEEPER-706-branch-34.patch and also submitting one that applies to trunk (which would be ZOOKEEPER-706.patch)? Thanks!

          Raul Gutierrez Segales added a comment -

          Yeah, this is because the patch is against the 3.4 branch.

          Chris Thunes added a comment -

          Sure, let me see if I can put something together. It looks like org.apache.zookeeper.test.DisconnectedWatcherTest would be an appropriate place for one. Does that seem correct?

          Chris Thunes added a comment -

          Will do.

          Raul Gutierrez Segales added a comment -

          Yeah, I think that makes sense. Something like setting 10K watches on /some/100/chars/path maybe?
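
          A rough sketch of that kind of test (illustrative; connection handling and the reconnect trigger are elided, and the counts and paths are arbitrary):

          import org.apache.zookeeper.CreateMode;
          import org.apache.zookeeper.ZooDefs.Ids;
          import org.apache.zookeeper.ZooKeeper;

          // Sketch: register enough watches that the reconnect-time SetWatches
          // payload would exceed jute.maxbuffer, then bounce the connection and
          // verify the session is re-established and the watches still fire.
          final class ManyWatchesSketch {
              static void registerManyWatches(ZooKeeper zk, String pathBase) throws Exception {
                  for (int i = 0; i < 10000; i++) {
                      String path = zk.create(pathBase + "/ch-" + i, null,
                              Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
                      zk.exists(path, true);      // data watch
                      zk.getChildren(path, true); // child watch
                  }
                  // ... force a disconnect/reconnect and assert watches survive ...
              }
          }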

          Chris Thunes added a comment -

          I've added a test and attached the updated patches for both the 3.4 branch and trunk.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12739317/ZOOKEEPER-706.patch
          against trunk revision 1684956.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          -1 findbugs. The patch appears to introduce 1 new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed core unit tests.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2766//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2766//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Console output: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2766//console

          This message is automatically generated.

          Raul Gutierrez Segales added a comment -

          We can ignore the failed test:

          1 tests failed.
          REGRESSION:  org.apache.zookeeper.test.ReconfigTest.testPortChange
          

          cc: Alexander Shraer

          Raul Gutierrez Segales added a comment -

          So chatting with Santeri (Santtu) Voutilainen about this, we think there is an issue in:

          +                        SetWatches sw = new SetWatches(lastZxid,
          +                                                       dataWatchesBatch,
          +                                                       existWatchesBatch,
          +                                                       childWatchesBatch);
          

          The lastZxid should be the same for all SetWatches packets. Since it can change while you are looping (because a watch could have fired and triggered a read), the client could otherwise claim that it's further ahead than it really is.

          So we should save lastZxid before we start sending the SetWatches packets.
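
          I.e., something along these lines (a sketch, not the committed code; hasMoreWatchBatches() and the *Batch variables are hypothetical stand-ins for the patch's loop state):

          // Snapshot the zxid once, before the batching loop, so that every
          // SetWatches packet reports the same point-in-time view.
          long setWatchesLastZxid = lastZxid;
          while (hasMoreWatchBatches()) {  // hypothetical helper
              SetWatches sw = new SetWatches(setWatchesLastZxid,
                                             dataWatchesBatch,
                                             existWatchesBatch,
                                             childWatchesBatch);
              // ... queue the packet ...
          }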

          Raul Gutierrez Segales added a comment -

          Small nit, in:

          +            String path = zk1.create(pathBase + "/ch-" + i, null, Ids.OPEN_ACL_UNSAFE,
          +                                     CreateMode.PERSISTENT);
          +            paths.add(path);
          

          Given that you are saving the path, you could just use CreateMode.PERSISTENT_SEQUENTIAL (to get a unique path), with no need for the + i.
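
          I.e., the quoted create call could become (sketch):

          // PERSISTENT_SEQUENTIAL appends a unique, monotonically increasing
          // suffix, so the manual "+ i" counter is no longer needed.
          String path = zk1.create(pathBase + "/ch-", null, Ids.OPEN_ACL_UNSAFE,
                                   CreateMode.PERSISTENT_SEQUENTIAL);
          paths.add(path);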

          Other than that and the lastZxid issue, I think it looks good. Thanks!

          Chris Thunes added a comment -

          Ah, good catch. I'll fix that up.

          Chris Thunes added a comment -

          Okay, these should address the zxid issue and the PERSISTENT_SEQUENTIAL issue.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12739332/ZOOKEEPER-706-branch-34.patch
          against trunk revision 1685167.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified tests.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2767//console

          This message is automatically generated.

          Raul Gutierrez Segales added a comment -

          Merged:

          trunk: https://github.com/apache/zookeeper/commit/aea212fb4cea300dd1eb213541f60352246a74d6
          3.5: https://github.com/apache/zookeeper/commit/66532c2b43daf96e769a8ed6cf217808d2c621ec
          3.4: https://github.com/apache/zookeeper/commit/05f17a04e5ef261abae19b08ef138f62febfb188

          Thanks Chris Thunes!

          Chris Thunes added a comment -

          Happy to contribute, and thanks for the feedback and quick turnaround!

          Hudson added a comment -

          FAILURE: Integrated in ZooKeeper-trunk #2725 (See https://builds.apache.org/job/ZooKeeper-trunk/2725/)
          ZOOKEEPER-706: Large numbers of watches can cause session re-establishment to fail
          (Chris Thunes via rgs) (rgs: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1685200)

          • /zookeeper/trunk/CHANGES.txt
          • /zookeeper/trunk/src/java/main/org/apache/zookeeper/ClientCnxn.java
          • /zookeeper/trunk/src/java/test/org/apache/zookeeper/test/DisconnectedWatcherTest.java

            People

            • Assignee: Unassigned
            • Reporter: Patrick Hunt
            • Votes: 7
            • Watchers: 22
