HBASE-11297: Remove some synchros in the rpcServer responder

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.99.0
    • Fix Version/s: 0.99.0, 2.0.0
    • Component/s: regionserver
    • Labels:
      None
    • Hadoop Flags:
      Reviewed

      Description

      This is on top of another patch that I'm going to put into another jira.

      1. 11297.v1.patch
        10 kB
        Nicolas Liochon
      2. 11297.v2.patch
        42 kB
        Nicolas Liochon
      3. 11297.v2.v98.patch
        38 kB
        Nicolas Liochon
      4. 11297.v3.patch
        43 kB
        Nicolas Liochon

        Issue Links

          Activity

          Nicolas Liochon added a comment -

          v1. Tests are in progress.

          Andrew Purtell added a comment -

          Please consider 0.98 also. Or I could do a back port.

          Andrew Purtell added a comment -

          Let me try HBASE-11298 and this under load

          stack added a comment -

          Can you write up high-level what has been changed? There is code movement and I see the queue now is unsynchronized – is this going to be safe? – but what else? A few high level notes would help us review. Thanks Nicolas Liochon

          Nicolas Liochon added a comment -

          The real changes are only on the doRespond path.
          There are some changes in purge, but the behavior should not change there: the documentation said we were discarding the calls, but the implementation was closing the connection. It's not really safe to do anything different anyway: imagine a half-replied call, you can't discard it.

          For doRespond, instead of locking when we want to add a call to the list, we lock only if we want to write on the channel. We also do a tryLock so we can return immediately and the handler does not wait.

          The tests are still running here (I've done multiple versions, and the previous versions were ok, but...). I haven't yet run the perf tests. It targets the scenario with multiple clients on the same connection. It's more or less a first step towards having more Responders (depending on the test results). I will do another deep review, but I wanted to show what I was working on.
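The enqueue-without-locking plus tryLock-for-the-write scheme described in this comment can be sketched roughly as follows. This is an illustrative reduction, not the actual RpcServer code: the class and field names are hypothetical and the StringBuilder stands in for the socket channel.

```java
import java.util.concurrent.ConcurrentLinkedDeque;
import java.util.concurrent.locks.ReentrantLock;

// Rough sketch of the patched doRespond path: handlers enqueue without a
// monitor, and only the thread that wins tryLock writes to the channel.
// All names here are hypothetical, not the real RpcServer internals.
public class ResponderSketch {
  // Lock-free queue: handlers add calls without synchronizing on it.
  final ConcurrentLinkedDeque<String> responseQueue = new ConcurrentLinkedDeque<>();
  // Taken only to write on the channel, never just to enqueue.
  final ReentrantLock writeLock = new ReentrantLock();
  final StringBuilder channel = new StringBuilder(); // stands in for the socket

  void doRespond(String call) {
    responseQueue.addLast(call); // no synchronization needed here
    // tryLock: if another thread is already writing, return immediately
    // so the handler is freed instead of blocking on a monitor.
    if (writeLock.tryLock()) {
      try {
        String c;
        while ((c = responseQueue.pollFirst()) != null) {
          channel.append(c); // "write to the channel"
        }
      } finally {
        writeLock.unlock();
      }
    }
    // If tryLock failed, the current lock holder (or the Responder
    // thread) drains the queue, including the call we just added.
  }
}
```

Note that in a real server a Responder thread must still sweep the queue periodically: a call enqueued just after the lock holder's final poll (but before its unlock is observed) would otherwise linger until the next write.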

          Liang Xie added a comment -

          I'll look at the patch, since I observed this

          synchronized (call.connection.responseQueue)

          hotspot during one of my read-only tests.
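For context, the pre-patch shape of that hotspot is roughly the following (a hypothetical reduction, not the actual RpcServer code): every handler thread takes the per-connection queue monitor even when it only wants to enqueue, which is exactly the contention the profile shows.

```java
import java.util.LinkedList;

// Hypothetical reduction of the pre-patch doRespond: the per-connection
// responseQueue monitor is taken both to enqueue and to write, so all
// handler threads for a connection contend on it.
public class SynchronizedResponderSketch {
  final LinkedList<String> responseQueue = new LinkedList<>();
  final StringBuilder channel = new StringBuilder(); // stands in for the socket

  void doRespond(String call) {
    synchronized (responseQueue) { // the profiled hotspot
      responseQueue.addLast(call);
      // Under the socket buffer size the handler writes inline,
      // still holding the queue monitor.
      channel.append(responseQueue.pollFirst());
    }
  }
}
```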

          Nicolas Liochon added a comment -

          Hum, it's difficult to have something obviously better. The attached patch seems to bring some improvements. It's very difficult to test. Some points:

          • with the current code, if we stay under the socket buffer size, we just queue on the lock. The responder is not really involved. That's also why we're seeing the lock on responseQueue. When the buffer size is reached, the behavior changes. This depends a lot on the network or client (mis)behavior. The patch changes this.
          • I've used sudo sysctl net.ipv4.tcp_wmem="1024 1024 1024" to change the buffer size and test the correctness.
          • performance is very sensitive to warm-up.
          • I've also changed PerformanceEvaluation to share the connection.

          I haven't run all the unit tests; I'm going to use Hadoop QA for this...
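The buffer-shrinking trick above can be made concrete as follows; the sysctl values are the ones quoted in the comment, and the read-back line is just the standard way to confirm a sysctl setting (remember to restore the distribution defaults afterwards).

```shell
# Shrink the TCP send buffer (min, default, max in bytes) so writes
# overflow quickly and the Responder write path is actually exercised.
sudo sysctl net.ipv4.tcp_wmem="1024 1024 1024"
# Read the setting back to confirm it took effect.
sysctl net.ipv4.tcp_wmem
```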

          Nicolas Liochon added a comment - edited

          So, feedback welcome on v2. It includes HBASE-11298; I will do another version once HBASE-11298 is committed (I will do that tomorrow).

          Andrew Purtell added a comment -

          If you can and are willing to put up 0.98 patches along with trunk versions, I can drop modified server jars into the testbed I have set up for 0.98 perf tests.

          Nicolas Liochon added a comment -

          Thanks a lot Andrew. I'm doing the patch.

          Nicolas Liochon added a comment -

          Here it is. Basic tests seem to say it's ok (small tests + TestHCM + random mediums).

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12649416/11297.v2.v98.patch
          against trunk revision .
          ATTACHMENT ID: 12649416

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified tests.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/9727//console

          This message is automatically generated.

          Liang Xie added a comment -

          Yeah, confirmed: no obvious benefit from another type of testing.
          In my previous testing, which observed this doRespond hotspot, I employed a single YCSB process with 64 threads and saw lots of threads blocked at doRespond.
          Then I reran with 64 YCSB processes with 1 thread each; the blocking had gone away, but still no obvious throughput improvement was found.

          Nicolas Liochon added a comment -

          Thanks Liang. What was your test exactly? 64 YCSB clients sharing a single connection?

          Liang Xie added a comment -

          No sharing. I just wanted to see whether, without that doRespond hotspot (it was also confirmed from the thread dumps that no thread was waiting on doRespond()), there is any throughput improvement or not.

          Nicolas Liochon added a comment -

          If the network is the bottleneck, we're not going to be much faster. However, we're freeing the handlers sooner w/ this patch, so it can help with some workloads. I'm going to try w/o the writes in the handlers at all. Previously it was slower, but I improved the code around the nio stuff, so maybe we're better now.

          Andrew Purtell added a comment -

          Thanks for the 98 patch. I am out of the office today and tomorrow, back Thursday, will run the usual YCSB tests for 98 with the patch applied then.

          Nicolas Liochon added a comment -

          I'm nearly ready to commit the v3. It includes HBASE-11298.
          I've made very few changes vs. v2; I will be running some performance tests today to validate them (the medium unit tests ran ok already).

          Andrew Purtell, did you have time to try the v2 patch on your cluster?

          Andrew Purtell added a comment -

          Andrew Purtell, did you have time to try the v2 patch on your cluster?

          No, I got sidetracked. Starting now.

          Andrew Purtell added a comment -

          I've collected before and after data for all YCSB workloads using the typical scenario. The patch was stable under 100k ops/sec load. Will process the data a bit later.

          Nicolas Liochon added a comment -

          I've tested the v3 on master. Still around 3 to 5% faster on my test (read with a single client, 1 to 16 threads), so if Andrew's tests are ok I will commit this.

          Andrew Purtell added a comment - edited

          I measure a small trade of latency for throughput on most workloads. I piggybacked this on something else, so the test was run on 8u20 b17. I would expect very similar results with 7u60.

          Workload A 0.98.4-SNAPSHOT 0.98.4-SNAPSHOT+11297
          OVERALL RunTime(ms) 100996 100738
          OVERALL Throughput(ops/sec) 99013 99267
          READ Operations 4999690 5000163
          READ AverageLatency(us) 651.6789 651.97
          READ MinLatency(us) 255 260
          READ MaxLatency(us) 655882 651083
          READ 95thPercentileLatency(ms) 0 0
          READ 99thPercentileLatency(ms) 4 5
          UPDATE Operations 5000470 4999996
          UPDATE AverageLatency(us) 32.80 27.25
          UPDATE MinLatency(us) 0 0
          UPDATE MaxLatency(us) 652182 687377
          UPDATE 95thPercentileLatency(ms) 0 0
          UPDATE 99thPercentileLatency(ms) 0 0


          Workload B 0.98.4-SNAPSHOT 0.98.4-SNAPSHOT+11297
          OVERALL RunTime(ms) 100536 100480
          OVERALL Throughput(ops/sec) 99466 99521
          READ Operations 9499944 9499498
          READ AverageLatency(us) 599.54 653.32
          READ MinLatency(us) 259 271
          READ MaxLatency(us) 710357 727001
          READ 95thPercentileLatency(ms) 0 1
          READ 99thPercentileLatency(ms) 3 4
          UPDATE Operations 500216 500661
          UPDATE AverageLatency(us) 119.18 145.18
          UPDATE MinLatency(us) 0 0
          UPDATE MaxLatency(us) 664203 666895
          UPDATE 95thPercentileLatency(ms) 0 0
          UPDATE 99thPercentileLatency(ms) 0 0


          Workload C 0.98.4-SNAPSHOT 0.98.4-SNAPSHOT+11297
          OVERALL RunTime(ms) 100021 100021
          OVERALL Throughput(ops/sec) 99979 99979
          READ Operations 1000000 1000000
          READ AverageLatency(us) 543.22 550.31
          READ MinLatency(us) 256 259
          READ MaxLatency(us) 735497 721836
          READ 95thPercentileLatency(ms) 0 0
          READ 99thPercentileLatency(ms) 4 4


          Workload D 0.98.4-SNAPSHOT 0.98.4-SNAPSHOT+11297
          OVERALL RunTime(ms) 103916 103250
          OVERALL Throughput(ops/sec) 96237 96854
          READ Operations 9500043 9499220
          READ AverageLatency(us) 662.044 674.25
          READ MinLatency(us) 262 261
          READ MaxLatency(us) 1054555 742158
          READ 95thPercentileLatency(ms) 1 0
          READ 99thPercentileLatency(ms) 4 5
          INSERT Operations 4999567 500780
          INSERT AverageLatency(us) 14.38 15.61
          INSERT MinLatency(us) 4 4
          INSERT MaxLatency(us) 492058 482944
          INSERT 95thPercentileLatency(ms) 0 0
          INSERT 99thPercentileLatency(ms) 0 0


          Workload E 0.98.4-SNAPSHOT 0.98.4-SNAPSHOT+11297
          OVERALL RunTime(ms) 1302270 1309841
          OVERALL Throughput(ops/sec) 7823 7777
          INSERT Operations 499441 499751
          INSERT AverageLatency(us) 18.06 15.86
          INSERT MinLatency(us) 5 5
          INSERT MaxLatency(us) 544490 576905
          INSERT 95thPercentileLatency(ms) 0 0
          INSERT 99thPercentileLatency(ms) 0 0
          SCAN Operations 9500559 9500248
          SCAN AverageLatency(us) 21637.25 21770.74
          SCAN MinLatency(us) 750 770
          SCAN MaxLatency(us) 3134120 3399749
          SCAN 95thPercentileLatency(ms) 124 123
          SCAN 99thPercentileLatency(ms) 171 174


          Workload F 0.98.4-SNAPSHOT 0.98.4-SNAPSHOT+11297
          OVERALL RunTime(ms) 95554 94002
          OVERALL Throughput(ops/sec) 92446 92670
          READ Operations 9333650 9333350
          READ AverageLatency(us) 701.67 716.06
          READ MinLatency(us) 264 272
          READ MaxLatency(us) 773925 745086
          READ 95thPercentileLatency(ms) 1 1
          READ 99thPercentileLatency(ms) 8 7
          READ-MODIFY-WRITE Operations 4666526 4667535
          READ-MODIFY-WRITE AverageLatency(us) 706.91 719.93
          READ-MODIFY-WRITE MinLatency(us) 268 276
          READ-MODIFY-WRITE MaxLatency(us) 773996 738262
          READ-MODIFY-WRITE 95thPercentileLatency(ms) 1 1
          READ-MODIFY-WRITE 99thPercentileLatency(ms) 8 7
          UPDATE Operations 4666675 4667684
          UPDATE AverageLatency(us) 23.40 17.86
          UPDATE MinLatency(us) 0 1
          UPDATE MaxLatency(us) 1264590 1261227
          UPDATE 95thPercentileLatency(ms) 0 0
          UPDATE 99thPercentileLatency(ms) 0 0
          Nicolas Liochon added a comment -

          Thanks.
          In theory, this patch impacts A, B & D, as YCSB uses a single connection per client: it's only the multiput that has multiple threads on the same connection.

          My results (on a different test, on master, and with the v3) are a little bit better. I think the code with this patch is slightly better, so I would like to commit it. Are you guys ok? Stack, any opinion?

          stack added a comment -

          Very nice numbers there, Andrew Purtell.

          I am good w/ the tradeoff. I like the patch.

          Nicolas Liochon added a comment -

          Committed to master. It includes HBASE-11298, already reviewed. I can revert if you want more time to review it.
          For 0.98, there are 3 options:
          1) commit as it is right now (then it will be in 0.98.4)
          2) commit but remove the log category change (that was the point in HBASE-11298)
          3) commit just after 0.98.4, to have more time if it breaks anything

          As it has been tested, I'm good w/ all options, but I'll let you decide, Andrew Purtell.
          The advantage of 1 or 3 is that the code remains the same on master & 0.98, so it will be easier to backport other patches later...

          Hudson added a comment -

          FAILURE: Integrated in HBase-TRUNK #5233 (See https://builds.apache.org/job/HBase-TRUNK/5233/)
          HBASE-11297 Remove some synchros in the rpcServer responder (nkeywal: rev 07a771866f18e8ec532c14f624fa908815bd88c7)

          • hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestDelayedRpc.java
          • hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java
          • hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/CallRunner.java
          Matteo Bertozzi added a comment -

          This patch is using ConcurrentLinkedDeque, which is Java 7 only.
          Is trunk supposed to be compiled only with JDK >= 7?
          (Same question for 0.98, if we backport it.)

          Nicolas Liochon added a comment -

          Sorry. I can amend it if necessary. I think we should build trunk with 1.7 only, but I didn't want to force it here, and I don't remember if we concluded anything when we discussed this previously.

          For 0.98 I suppose we can't change this now...

          Andrew Purtell added a comment -

          Although ConcurrentLinkedDeque does not use any Unsafe methods not available in the 6 runtime, we cannot clone the source from the JDK into our distribution for JRE < 7 because OpenJDK is GPL 2.

          Changing the patch to use another data structure invalidates the performance analysis done on this issue.

          Therefore sadly let's not include this in 0.98. Supporting JDK >= 7 only for trunk sounds reasonable to me. We should discuss it on dev@. Depending on the outcome of that discussion amending trunk may not be necessary.
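To make the JDK constraint concrete: java.util.concurrent.ConcurrentLinkedDeque first shipped in JDK 7, so code like the following fails to compile against a Java 6 bootclasspath. This is a minimal usage sketch, not taken from the patch itself.

```java
import java.util.concurrent.ConcurrentLinkedDeque;

// Minimal usage of the JDK 7-only class the patch relies on: handler
// threads can addLast() concurrently while the responder pollFirst()s,
// all without taking a monitor, unlike a synchronized LinkedList.
public class DequeDemo {
  public static void main(String[] args) {
    ConcurrentLinkedDeque<String> queue = new ConcurrentLinkedDeque<>();
    queue.addLast("call-1");
    queue.addLast("call-2");
    System.out.println(queue.pollFirst()); // prints call-1
    System.out.println(queue.pollFirst()); // prints call-2
  }
}
```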

          Nicolas Liochon added a comment -

          We should discuss it on dev@.

          Done. Depending on how fast it goes I will revert this patch...

          Nicolas Liochon added a comment -

          Changing the patch to use another data structure invalidates the performance analysis done on this issue.

          Yeah, you're right. Let's keep it for 1.0 only then. Thanks a lot for your time, Andrew.

          Nicolas Liochon added a comment -

          This was committed a while ago...

          Enis Soztutar added a comment -

          Closing this issue after 0.99.0 release.


            People

            • Assignee: Nicolas Liochon
            • Reporter: Nicolas Liochon
            • Votes: 0
            • Watchers: 12
