Details
Type: Bug
Status: Open
Priority: Major
Resolution: Unresolved
Affects Version/s: 2.4.3
Fix Version/s: None
Component/s: None
Environment: CentOS 6.4 running WebSphere Application Server 7.0.0.19. Jackrabbit cluster configuration with 2 WAS servers. Repository on DB2 9.7.
Description
In our performance analysis we are seeing a strange effect that does not make sense to us. It may or may not be a defect, but we need to understand why it occurs.

In a 2-node cluster, we run a certain load (reading and writing) directly on Node1 and an equivalent load (reading and writing) on Node2. We measure the response time on both nodes, and it is less than 2 seconds. If we stop the load to one of the servers, the response time on the other server triples, even though that server receives no additional load.

See the attached image "JackrabbitCluster-ResponseTime.png". The left side of the report shows the period when only Node1 has load and Node2 has none; during this period the response times on Node1 are about 6 seconds. On the right side of the report, we add an equivalent load to Node2, and the response times on Node1 drop to 2 seconds. The load on Node1 was consistent the whole time, yet ADDING load to Node2 actually improves the response time on Node1. Logically, it doesn't make much sense, eh? Could someone please at least help us understand why this may be happening?
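For reference, the ticket does not include the repository configuration, so the sketch below only illustrates the kind of 2-node cluster setup assumed here: each node's repository.xml containing a Cluster element with a DatabaseJournal that points at the shared DB2 repository database. The node id, syncDelay, host, database name, credentials, and schema prefix are placeholders, not values taken from the ticket; syncDelay is the interval at which an idle node polls the shared journal for revisions written by the other node.

    <!-- Assumed cluster section of repository.xml on Node1; Node2 would be
         identical except for the id attribute. All values are placeholders. -->
    <Cluster id="node1" syncDelay="2000">
      <Journal class="org.apache.jackrabbit.core.journal.DatabaseJournal">
        <param name="revision" value="${rep.home}/revision.log"/>
        <param name="driver" value="com.ibm.db2.jcc.DB2Driver"/>
        <param name="url" value="jdbc:db2://dbhost:50000/JCRDB"/>
        <param name="user" value="jcruser"/>
        <param name="password" value="jcrpass"/>
        <param name="databaseType" value="db2"/>
        <param name="schemaObjectPrefix" value="journal_"/>
      </Journal>
    </Cluster>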