Hadoop HDFS
HDFS-3440

should more effectively limit stream memory consumption when reading corrupt edit logs

Details

    • Type: Bug
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.0.2-alpha
    • Component/s: None
    • Labels: None
    • Target Version/s:
    • Hadoop Flags: Reviewed

Description

Currently, we do in.mark(100MB) before reading an opcode out of the edit log. However, this could result in us using all of those 100 MB when reading bogus data, which is not what we want. It also could easily make some corrupt edit log files unreadable.

We should have a stream limiter interface that throws a clean IOException when we're in this situation, rather than consuming a huge amount of memory.
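
As a rough sketch of that idea (not the committed patch; the class and method names here, such as LimitedInputStream and setLimit, are illustrative assumptions), a wrapper stream can count bytes and fail fast once a read would exceed the per-opcode cap:

    import java.io.FilterInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    // Hypothetical sketch of the stream-limiter idea, not the actual patch.
    class LimitedInputStream extends FilterInputStream {
        private long remaining = Long.MAX_VALUE;  // bytes left before we fail

        LimitedInputStream(InputStream in) {
            super(in);
        }

        // Cap how many bytes the next opcode may consume.
        void setLimit(long limit) {
            remaining = limit;
        }

        @Override
        public int read() throws IOException {
            if (remaining <= 0) {
                throw new IOException(
                    "Read past the per-opcode limit; edit log is likely corrupt");
            }
            int b = super.read();
            if (b != -1) {
                remaining--;
            }
            return b;
        }

        @Override
        public int read(byte[] buf, int off, int len) throws IOException {
            if (remaining <= 0) {
                throw new IOException(
                    "Read past the per-opcode limit; edit log is likely corrupt");
            }
            int n = super.read(buf, off, (int) Math.min(len, remaining));
            if (n > 0) {
                remaining -= n;
            }
            return n;
        }
    }

A reader would call setLimit with a sane maximum opcode size before decoding each opcode, so a bogus length field surfaces as a clean IOException instead of a 100 MB mark buffer.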

Attachments

    1. HDFS-3440.002.patch (12 kB, Colin Patrick McCabe)
    2. HDFS-3440.001.patch (10 kB, Colin Patrick McCabe)

Activity

No work has yet been logged on this issue.

People

    • Assignee: Colin Patrick McCabe
    • Reporter: Colin Patrick McCabe
    • Votes: 0
    • Watchers: 7
