Hadoop Common / HADOOP-11901

BytesWritable fails to support 2G chunks due to integer overflow


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.8.0, 3.0.0-alpha1
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

Description

    BytesWritable.setSize grows the backing buffer by a factor of 1.5 each time (size * 3 / 2). This is unsafe because the intermediate size * 3 is computed in 32-bit int arithmetic and overflows once the requested size exceeds roughly 700 MB (700 MB * 3 > 2 GB > Integer.MAX_VALUE), so the effective maximum size is capped at about 700 MB rather than the ~2 GB a byte array can hold.
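
    For illustration, the standalone snippet below (not code from BytesWritable itself) shows how the intermediate multiplication wraps around to a negative value:

{code:java}
public class OverflowDemo {
  public static void main(String[] args) {
    // Roughly 800 MB -- on its own still a legal byte[] length.
    int size = 800 * 1024 * 1024;
    // size * 3 exceeds Integer.MAX_VALUE, so the int result wraps negative.
    int grown = size * 3 / 2;
    System.out.println(grown); // prints -889192448
    // new byte[grown] would then throw java.lang.NegativeArraySizeException.
  }
}
{code}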

    I didn't write a test case for this because, in order to trigger it, I'd need to allocate around 700 MB, which is pretty expensive to do in a unit test. Note that I didn't throw any exception in the case of integer overflow, as I didn't want to change that behavior (callers of this might expect a java.lang.NegativeArraySizeException).
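
    A minimal sketch of one overflow-safe variant of the 1.5x growth is shown below; the class and constant names are assumptions for illustration, not the committed patch:

{code:java}
public class GrowableBuffer {
  // Many JVMs reject array lengths very close to Integer.MAX_VALUE,
  // so clamp slightly below it (assumed value, not taken from the patch).
  private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;

  private byte[] bytes = new byte[0];
  private int size = 0;

  public void setSize(int newSize) {
    if (newSize > bytes.length) {
      // Do the arithmetic in long so newSize * 3 cannot wrap negative,
      // then clamp to a legal array length before casting back to int.
      long grown = Math.min((long) newSize * 3L / 2L, (long) MAX_ARRAY_SIZE);
      setCapacity((int) grown);
    }
    size = newSize;
  }

  private void setCapacity(int capacity) {
    byte[] copy = new byte[capacity];
    System.arraycopy(bytes, 0, copy, 0, Math.min(size, capacity));
    bytes = copy;
  }
}
{code}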

Attachments

Issue Links

Activity


People

    Assignee: Reynold Xin (rxin)
    Reporter: Reynold Xin (rxin)
    Votes: 0
    Watchers: 10

Dates

    Created:
    Updated:
    Resolved:
