Hadoop HDFS / HDFS-6151

HDFS should refuse to cache blocks >=2GB


    Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 2.4.0
    • Fix Version/s: None
    • Component/s: caching, datanode
    • Labels: None

    Description

    If you try to cache a block that's >=2GB, the DataNode will silently fail to cache it, since MappedByteBuffer uses a signed int to represent size. Blocks this large are rare, but we should log or alert the user somehow.
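
    The limitation comes from java.nio: MappedByteBuffer addresses its contents with a signed int, and FileChannel.map() rejects any mapping size above Integer.MAX_VALUE. A minimal sketch of the kind of guard and log message this issue asks for (the class and method names below are hypothetical, not the actual DataNode caching code):

    import java.io.IOException;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    /**
     * Minimal sketch of a pre-mmap size check: refuse to cache, and log, any
     * block replica whose length cannot be mapped into a MappedByteBuffer.
     */
    public class BlockCacheSizeCheck {
      static MappedByteBuffer mapBlockForCaching(FileChannel blockChannel, long blockLength)
          throws IOException {
        // MappedByteBuffer uses a signed int for its size, so FileChannel.map()
        // throws IllegalArgumentException for any length above Integer.MAX_VALUE.
        if (blockLength > Integer.MAX_VALUE) {
          // Hypothetical log message; the real DataNode would use its own logger.
          System.err.println("Refusing to cache block of length " + blockLength
              + " bytes: exceeds the 2GB MappedByteBuffer limit");
          return null;
        }
        return blockChannel.map(FileChannel.MapMode.READ_ONLY, 0, blockLength);
      }
    }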

    People

    • Assignee: andrew.wang (Andrew Wang)
    • Reporter: andrew.wang (Andrew Wang)
    • Votes: 0
    • Watchers: 6
