Flink / FLINK-4860 Sort performance / FLINK-3722

The divisions in the InMemorySorters' swap/compare methods hurt performance

    Details

    • Type: Sub-task
    • Status: Closed
    • Priority: Minor
    • Resolution: Implemented
    • Affects Version/s: None
    • Fix Version/s: 1.3.0
    • Component/s: Local Runtime
    • Labels:

      Description

NormalizedKeySorter's and FixedLengthRecordSorter's swap and compare methods use divisions (which are expensive [1]) to calculate the index of the MemorySegment and the offset inside the segment. Greg Hogan reported on the mailing list [2] measuring a ~12-14% performance impact in one case.
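      For illustration, a minimal sketch of this kind of address calculation (the names here are hypothetical, not the exact Flink code):

      ```
      // Hypothetical sketch: mapping a logical record index to a (segment, offset)
      // pair with integer division and modulo, as the sorters do today.
      static long segmentAndOffset(int logicalIndex, int recordSize, int segmentSize) {
          int recordsPerSegment = segmentSize / recordSize;       // computed once in practice
          int segmentIndex = logicalIndex / recordsPerSegment;    // which MemorySegment
          int segmentOffset = (logicalIndex % recordsPerSegment) * recordSize; // byte offset inside it
          return ((long) segmentIndex << 32) | (segmentOffset & 0xFFFFFFFFL);  // packed only for this example
      }
      ```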

      A possibility to improve the situation is the following:
      QuickSort mostly uses these compare and swap methods by maintaining two indices and calling compare and swap with them. The key observation is that these indices are mostly stepped by one, and incrementally maintaining the quotient and modulo is easy when the index changes only by one: increment/decrement the modulo, check whether it has reached the divisor (when incrementing) or gone below zero (when decrementing), and if so, wrap the modulo around and increment/decrement the quotient.

      To implement this, InMemorySorter would have to expose an iterator that would have the divisor and the current modulo and quotient as state, and have increment/decrement methods. Compare and swap could then have overloads that take these iterators as arguments.
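      A rough sketch of what such an iterator might look like (hypothetical names; the real version would live alongside InMemorySorter and also expose the MemorySegment it currently points to):

      ```
      // Hypothetical sketch of a sort-index cursor that maintains the quotient
      // (segment number) and modulo (record within the segment) incrementally,
      // so stepping by one record needs no division or modulo.
      final class SortIndexCursor {
          private final int recordsPerSegment;  // the divisor
          private final int recordSize;         // bytes per record
          private int segment;                  // quotient: which MemorySegment
          private int record;                   // modulo: record index within the segment
          private int offset;                   // record * recordSize, kept in sync

          SortIndexCursor(int recordsPerSegment, int recordSize) {
              this.recordsPerSegment = recordsPerSegment;
              this.recordSize = recordSize;
          }

          void increment() {
              if (++record == recordsPerSegment) {  // wrapped past the last record in the segment
                  record = 0;
                  offset = 0;
                  segment++;
              } else {
                  offset += recordSize;
              }
          }

          void decrement() {
              if (--record < 0) {                   // wrapped past the first record in the segment
                  record = recordsPerSegment - 1;
                  offset = record * recordSize;
                  segment--;
              } else {
                  offset -= recordSize;
              }
          }

          int segmentIndex() { return segment; }
          int segmentOffset() { return offset; }
      }
      ```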

      [1] http://www.agner.org/optimize/instruction_tables.pdf
      [2] http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Macro-benchmarking-for-performance-tuning-and-regression-detection-td11078.html

        Issue Links

          Activity

          StephanEwen Stephan Ewen added a comment -

          Good observation.

          How about implementing the QuickSort directly on the InMemorySorter - then the offsets can be computed by adding/subtracting the key/pointer length.
          This could still fall back to a heap sort, if the quicksort degenerates.
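          Sketched out (purely illustrative names and sizes, not the actual sorter layout), the stepping would reduce to fixed-width pointer arithmetic:

          ```
          // Illustrative only: with quicksort running directly over the sorter's
          // fixed-width index entries, moving the left/right cursors is plain
          // addition/subtraction of the entry width.
          static final int ASSUMED_KEY_LEN = 8;      // normalized key bytes (assumption)
          static final int ASSUMED_POINTER_LEN = 8;  // record pointer bytes (assumption)
          static final int ENTRY_WIDTH = ASSUMED_KEY_LEN + ASSUMED_POINTER_LEN;

          static int advance(int entryOffset) { return entryOffset + ENTRY_WIDTH; }  // next entry
          static int retreat(int entryOffset) { return entryOffset - ENTRY_WIDTH; }  // previous entry
          ```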

          githubbot ASF GitHub Bot added a comment -

          GitHub user greghogan opened a pull request:

          https://github.com/apache/flink/pull/2628

          FLINK-3722 [runtime] Don't / and % when sorting

          Replace division and modulus with addition and subtraction.

          The timing chart below has three columns for two Gelly algorithms with increasing scale. The first column is timings for the master branch. The second column replaces division and modulus with shift-and-bitmask by storing each element in a power-of-2 chunk of memory. The third column is for this PR, which modifies the sort algorithm to track the buffer number and offset for each index. This code has the advantage that no memory is wasted.

          In this comparison, for both int and long, we appear to already be operating on power-of-2 size elements. I would expect this PR to perform relatively better with the FixedLengthRecordSorter instrumented in FLINK-4705, where shift-and-bitmask would require padding elements.
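          For reference, a rough sketch of the shift-and-bitmask variant (assuming the record size and records-per-segment are both powers of two; names are illustrative, not the PR's code):

          ```
          // Illustrative only: with power-of-2 element sizes, division and modulo
          // become a shift and a bitmask.
          static long segmentAndOffsetPow2(int logicalIndex, int recordsPerSegmentShift, int recordSizeShift) {
              int segmentIndex = logicalIndex >>> recordsPerSegmentShift;               // index / recordsPerSegment
              int recordInSegment = logicalIndex & ((1 << recordsPerSegmentShift) - 1); // index % recordsPerSegment
              int segmentOffset = recordInSegment << recordSizeShift;                   // * recordSize
              return ((long) segmentIndex << 32) | (segmentOffset & 0xFFFFFFFFL);       // packed only for this example
          }
          ```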

          This initial PR does not modify `HeapSort`. I wanted to first get acceptance for the current set of modifications. There are further optimizations which can be benchmarked in follow-on tickets.

          ```
          HITS, scale=16, INT : runtime= 1.618 (8) 1.524 (8) ( -5.82%) 1.546 (8) ( -4.47%)
          HITS, scale=16, LONG : runtime= 1.579 (8) 1.552 (8) ( -1.76%) 1.541 (8) ( -2.43%)
          HITS, scale=16, STRING: runtime= 1.774 (8) 1.739 (8) ( -1.99%) 1.701 (8) ( -4.11%)

          HITS, scale=17, INT : runtime= 2.032 (8) 1.950 (8) ( -4.06%) 1.937 (8) ( -4.71%)
          HITS, scale=17, LONG : runtime= 1.986 (8) 1.923 (8) ( -3.15%) 1.926 (8) ( -3.01%)
          HITS, scale=17, STRING: runtime= 2.377 (8) 2.275 (8) ( -4.30%) 2.289 (8) ( -3.68%)

          HITS, scale=18, INT : runtime= 2.894 (8) 2.761 (8) ( -4.59%) 2.762 (8) ( -4.54%)
          HITS, scale=18, LONG : runtime= 2.863 (8) 2.754 (8) ( -3.81%) 2.735 (8) ( -4.48%)
          HITS, scale=18, STRING: runtime= 3.683 (8) 3.455 (8) ( -6.18%) 3.461 (8) ( -6.02%)

          HITS, scale=19, INT : runtime= 4.914 (8) 4.631 (8) ( -5.76%) 4.595 (8) ( -6.49%)
          HITS, scale=19, LONG : runtime= 4.792 (8) 4.578 (8) ( -4.45%) 4.538 (8) ( -5.28%)
          HITS, scale=19, STRING: runtime= 6.484 (8) 6.149 (8) ( -5.18%) 6.118 (8) ( -5.64%)

          HITS, scale=20, INT : runtime= 8.662 (8) 8.086 (8) ( -6.64%) 8.091 (8) ( -6.59%)
          HITS, scale=20, LONG : runtime= 8.407 (8) 7.993 (8) ( -4.93%) 7.918 (8) ( -5.82%)
          HITS, scale=20, STRING: runtime= 11.946 (8) 11.327 (8) ( -5.18%) 11.272 (8) ( -5.64%)

          HITS, scale=21, INT : runtime= 16.248 (8) 15.137 (8) ( -6.84%) 15.207 (8) ( -6.41%)
          HITS, scale=21, LONG : runtime= 15.907 (8) 14.750 (8) ( -7.28%) 14.724 (8) ( -7.44%)
          HITS, scale=21, STRING: runtime= 23.634 (8) 22.596 (8) ( -4.39%) 22.546 (8) ( -4.61%)

          HITS, scale=22, INT : runtime= 31.964 (8) 29.954 (8) ( -6.29%) 29.808 (8) ( -6.74%)
          HITS, scale=22, LONG : runtime= 31.244 (8) 29.241 (8) ( -6.41%) 28.829 (8) ( -7.73%)
          HITS, scale=22, STRING: runtime= 47.870 (6) 45.942 (6) ( -4.03%) 45.987 (8) ( -3.93%)

          HITS, scale=23, INT : runtime= 65.146 (4) 60.245 (4) ( -7.52%) 60.008 (4) ( -7.89%)
          HITS, scale=23, LONG : runtime= 64.770 (4) 59.828 (4) ( -7.63%) 59.334 (4) ( -8.39%)
          HITS, scale=23, STRING: runtime= 103.690 (2) 99.335 (2) ( -4.20%) 98.919 (3) ( -4.60%)

          HITS, scale=24, INT : runtime= 156.567 (2) 138.923 (2) (-11.27%) 141.185 (2) ( -9.82%)
          HITS, scale=24, LONG : runtime= 141.904 (2) 132.292 (2) ( -6.77%) 129.677 (2) ( -8.62%)
          HITS, scale=24, STRING: runtime= 221.081 (1) 217.128 (1) ( -1.79%) 213.822 (1) ( -3.28%)

          HITS, scale=25, INT : runtime= 280.720 (1)
          HITS, scale=25, LONG : runtime= 321.418 (1) 297.764 (1) ( -7.36%) 293.982 (1) ( -8.54%)

          JaccardIndex, scale=12, INT : runtime= 0.413 (8) 0.424 (8) ( 2.57%) 0.408 (8) ( -1.30%)
          JaccardIndex, scale=12, LONG : runtime= 0.440 (8) 0.414 (8) ( -5.85%) 0.424 (8) ( -3.64%)
          JaccardIndex, scale=12, STRING: runtime= 0.667 (8) 0.647 (8) ( -2.91%) 0.644 (8) ( -3.38%)

          JaccardIndex, scale=13, INT : runtime= 0.811 (8) 0.743 (8) ( -8.45%) 0.764 (8) ( -5.75%)
          JaccardIndex, scale=13, LONG : runtime= 0.885 (8) 0.822 (8) ( -7.12%) 0.805 (8) ( -9.01%)
          JaccardIndex, scale=13, STRING: runtime= 1.599 (8) 1.489 (8) ( -6.86%) 1.486 (8) ( -7.04%)

          JaccardIndex, scale=14, INT : runtime= 1.809 (8) 1.621 (8) (-10.43%) 1.649 (8) ( -8.87%)
          JaccardIndex, scale=14, LONG : runtime= 2.003 (8) 1.834 (8) ( -8.43%) 1.800 (8) (-10.13%)
          JaccardIndex, scale=14, STRING: runtime= 3.855 (8) 3.666 (8) ( -4.91%) 3.667 (8) ( -4.86%)

          JaccardIndex, scale=15, INT : runtime= 4.693 (8) 4.158 (8) (-11.40%) 4.148 (8) (-11.62%)
          JaccardIndex, scale=15, LONG : runtime= 5.205 (8) 4.936 (8) ( -5.17%) 4.712 (8) ( -9.48%)
          JaccardIndex, scale=15, STRING: runtime= 10.882 (8) 10.429 (8) ( -4.16%) 10.529 (8) ( -3.24%)

          JaccardIndex, scale=16, INT : runtime= 13.504 (8) 11.852 (8) (-12.24%) 12.124 (8) (-10.22%)
          JaccardIndex, scale=16, LONG : runtime= 14.933 (8) 13.482 (8) ( -9.72%) 13.279 (8) (-11.08%)
          JaccardIndex, scale=16, STRING: runtime= 31.802 (8) 30.549 (8) ( -3.94%) 30.910 (8) ( -2.80%)

          JaccardIndex, scale=17, INT : runtime= 36.742 (8) 32.542 (8) (-11.43%) 33.193 (8) ( -9.66%)
          JaccardIndex, scale=17, LONG : runtime= 40.648 (6) 37.355 (6) ( -8.10%) 36.786 (8) ( -9.50%)
          JaccardIndex, scale=17, STRING: runtime= 89.418 (4) 84.632 (4) ( -5.35%) 86.596 (4) ( -3.16%)

          JaccardIndex, scale=18, INT : runtime= 102.126 (4) 91.521 (4) (-10.38%) 90.900 (4) (-10.99%)
          JaccardIndex, scale=18, LONG : runtime= 121.389 (3) 113.324 (3) ( -6.64%) 112.323 (4) ( -7.47%)
          JaccardIndex, scale=18, STRING: runtime= 251.373 (2) 240.014 (2) ( -4.52%) 243.849 (2) ( -2.99%)

          JaccardIndex, scale=19, INT : runtime= 286.927 (1) 254.767 (1) (-11.21%) 260.911 (2) ( -9.07%)
          JaccardIndex, scale=19, LONG : runtime= 331.742 (1) 303.246 (1) ( -8.59%) 301.764 (2) ( -9.04%)
          ```

          You can merge this pull request into a Git repository by running:

          $ git pull https://github.com/greghogan/flink 3722_the_divisions_in_the_inmemorysorters_swapcompare_methods_hurt_performance

          Alternatively you can review and apply these changes as the patch at:

          https://github.com/apache/flink/pull/2628.patch

          To close this pull request, make a commit to your master/trunk branch
          with (at least) the following in the commit message:

          This closes #2628


          commit 3283ef6e994ff831f1bb04d075a32995de8596c4
          Author: Greg Hogan <code@greghogan.com>
          Date: 2016-10-05T20:13:02Z

          FLINK-3722 [runtime] Don't / and % when sorting

          Replace division and modulus with addition and subtraction.


          githubbot ASF GitHub Bot added a comment -

          Github user StephanEwen commented on the issue:

          https://github.com/apache/flink/pull/2628

          Thanks for these cool experiments. Looks like a nice improvement.
          I'll try to dig into your code in the next days...

          githubbot ASF GitHub Bot added a comment -

          Github user ggevay commented on a diff in the pull request:

          https://github.com/apache/flink/pull/2628#discussion_r84248344

          — Diff: flink-runtime/src/main/java/org/apache/flink/runtime/operators/sort/QuickSort.java —
          ```
          @@ -49,20 +49,40 @@ protected static int getMaxDepth(int x) {
           	 * then switch to {@link HeapSort}.
           	 */
           	public void sort(final IndexedSortable s, int p, int r) {
          -		sortInternal(s, p, r, getMaxDepth(r - p));
          +		int recordsPerSegment = s.recordsPerSegment();
          +		int recordSize = s.recordSize();
          +
          +		int maxOffset = recordSize * (recordsPerSegment - 1);
          +
          +		int size = s.size();
          +		int sizeN = size / recordsPerSegment;
          +		int sizeO = (size % recordsPerSegment) * recordSize;
          +
          +		sortInternal(s, recordsPerSegment, recordSize, maxOffset, 0, 0, 0, size, sizeN, sizeO, getMaxDepth(r - p));
           	}

           	public void sort(IndexedSortable s) {
           		sort(s, 0, s.size());
           	}

          -	private static void sortInternal(final IndexedSortable s, int p, int r, int depth) {
          +	private static void sortInternal(final IndexedSortable s, int recordsPerSegment, int recordSize, int maxOffset,
          +			int p, int pN, int pO, int r, int rN, int rO, int depth) {
          ```
          — End diff —

          Could you please add a comment that explains all these parameters? (I understand them only because I know the original code and also what you are trying to achieve, but for someone who sees the code for the first time this will be quite scary.)

          githubbot ASF GitHub Bot added a comment -

          Github user greghogan commented on the issue:

          https://github.com/apache/flink/pull/2628

          Thanks @ggevay for reviewing. I added a commit with additional comments.

          I transcribed `Quicksort` so as to remove considerations of Java performance and inlining. It was not clear to me that if we encapsulated the index, page number, and page offset into an object that Java would inline the various increment and decrement functions. Also, I don't think this looks too bad. I'm happy to reformat if that is preferred.

          I think this is the best time to investigate alternative methods. I'm not seeing how one would sort on top of `InMemorySorter` without deserializing records.

          githubbot ASF GitHub Bot added a comment -

          Github user ggevay commented on the issue:

          https://github.com/apache/flink/pull/2628

          > I transcribed Quicksort so as to remove considerations of Java performance and inlining. It was not clear to me that if we encapsulated the index, page number, and page offset into an object that Java would inline the various increment and decrement functions. Also, I don't think this looks too bad. I'm happy to reformat if that is preferred.

          OK, I would say that it is OK like this, but let's see what the others will say.

          githubbot ASF GitHub Bot added a comment -

          Github user asfgit closed the pull request at:

          https://github.com/apache/flink/pull/2628

          greghogan Greg Hogan added a comment -

          Implemented in 336b95d4eedc23e5ce37d1739165157e127c65f8

          githubbot ASF GitHub Bot added a comment -

          Github user greghogan commented on the issue:

          https://github.com/apache/flink/pull/3511

          @heytitle please remove the old FLINK-3722 commits and rebase to master.

          githubbot ASF GitHub Bot added a comment -

          Github user heytitle commented on the issue:

          https://github.com/apache/flink/pull/3511

          @greghogan May I ask how to remove the `FLINK-3722` commits? The only way I can think of is `git rebase -i`, but this will rewrite the history of this PR.


            People

            • Assignee:
              greghogan Greg Hogan
              Reporter:
              ggevay Gabor Gevay
            • Votes:
              0
              Watchers:
              9

              Dates

              • Created:
                Updated:
                Resolved:
