Hadoop Common

HADOOP-11466: FastByteComparisons: do not use UNSAFE_COMPARER on the SPARC architecture because it is slower there

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.6.1, 3.0.0-alpha1
    • Component/s: io, performance, util
    • Labels: None
    • Environment: Linux x86 and Solaris SPARC
    • Target Version/s: None

    Description

      One difference between Hadoop 2.x and Hadoop 1.x is a utility that compares two byte arrays at a coarser 8-byte granularity instead of byte by byte. The discussion on HADOOP-7761 notes that this fast byte comparison is somewhat faster for longer arrays and somewhat slower for shorter ones (AVRO-939). To perform 8-byte reads at addresses that are not aligned to 8-byte boundaries, the patch uses Unsafe.getLong. The problem is that this call is incredibly expensive on SPARC: the Studio compiler detects the unaligned read and handles it in software. x86 supports unaligned reads in hardware, so the same call carries no penalty there.
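      A minimal sketch of the 8-byte-granularity comparison the description refers to (this is not Hadoop's actual FastByteComparisons code; the class name, method signature, and little-endian assumption below are illustrative):

      {code:java}
      import sun.misc.Unsafe;
      import java.lang.reflect.Field;

      public class LongCompareSketch {

          private static final Unsafe UNSAFE = loadUnsafe();
          private static final long BASE = UNSAFE.arrayBaseOffset(byte[].class);

          private static Unsafe loadUnsafe() {
              try {
                  Field f = Unsafe.class.getDeclaredField("theUnsafe");
                  f.setAccessible(true);
                  return (Unsafe) f.get(null);
              } catch (ReflectiveOperationException e) {
                  throw new AssertionError(e);
              }
          }

          // Lexicographic compare reading 8 bytes per iteration. The getLong
          // calls are not guaranteed to hit 8-byte-aligned addresses: cheap on
          // x86, but fixed up in software on SPARC, which is the slowdown this
          // issue describes.
          static int compareTo(byte[] a, byte[] b) {
              int minLen = Math.min(a.length, b.length);
              int wordLimit = minLen & ~7;    // largest multiple of 8 <= minLen
              for (int i = 0; i < wordLimit; i += 8) {
                  long la = UNSAFE.getLong(a, BASE + i);
                  long lb = UNSAFE.getLong(b, BASE + i);
                  if (la != lb) {
                      // Assumes a little-endian host (x86): reverse to byte
                      // order before the unsigned compare. Real code would
                      // branch on ByteOrder.nativeOrder().
                      return Long.compareUnsigned(Long.reverseBytes(la),
                                                  Long.reverseBytes(lb));
                  }
              }
              for (int i = wordLimit; i < minLen; i++) {  // trailing 0-7 bytes
                  int diff = (a[i] & 0xff) - (b[i] & 0xff);
                  if (diff != 0) {
                      return diff;
                  }
              }
              return a.length - b.length;
          }
      }
      {code}

      The fix suggested by this issue's title is to skip the Unsafe-based comparer on SPARC, presumably by checking System.getProperty("os.arch") at class-load time and falling back to the plain byte-by-byte comparer there, so that the unaligned getLong calls are never issued on that architecture.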


    People

    • Assignee: Suman Somasundar (sumansomasundar)
    • Reporter: Suman Somasundar (sumansomasundar)
    • Votes: 0
    • Watchers: 9
