Hadoop Common / HADOOP-11901

BytesWritable fails to support 2G chunks due to integer overflow

    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.8.0, 3.0.0-alpha1
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      BytesWritable.setSize grows the buffer by a factor of 1.5 each time (size * 3 / 2). This is unsafe because the intermediate product size * 3 overflows a signed 32-bit int, which restricts the maximum size to roughly 700 MB (700 MB * 3 > 2 GB), even though a byte[] can hold up to 2 GB.

      I didn't write a test case for this because triggering it requires allocating around 700 MB, which is too expensive for a unit test. Note that I don't throw any exception in the case of integer overflow, since I didn't want to change that behavior (callers might expect a java.lang.NegativeArraySizeException).
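      The committed change is in the attached diffs; as a rough sketch of the idea (not necessarily the exact patch), the 1.5x growth can be computed in 64-bit arithmetic and clamped to Integer.MAX_VALUE so that size * 3 never wraps to a negative int:

          // Hypothetical sketch of an overflow-safe setSize for BytesWritable.
          // The old computation (size * 3 / 2) overflows once size exceeds
          // Integer.MAX_VALUE / 3 (about 715 MB); doing the multiply in long
          // and clamping keeps larger sizes working.
          public void setSize(int size) {
            if (size > getCapacity()) {
              // 3L * size is evaluated as a long, so it cannot wrap negative.
              long newCapacity = Math.min(Integer.MAX_VALUE, 3L * size / 2L);
              setCapacity((int) newCapacity);
            }
            this.size = size;
          }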

        Attachments

        1. HADOOP-11901 (3).diff (0.8 kB), attached by Reynold Xin
        2. HADOOP-11901.diff (0.7 kB), attached by Reynold Xin



              People

               • Assignee: Reynold Xin (rxin)
               • Reporter: Reynold Xin (rxin)
               • Votes: 0
               • Watchers: 10
