Hadoop HDFS / HDFS-3440

should more effectively limit stream memory consumption when reading corrupt edit logs


Details

    • Type: Bug
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.0.2-alpha
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

Description

Currently, we do in.mark(100MB) before reading an opcode out of the edit log. However, this can cause us to buffer up to the full 100 MB when the stream contains bogus data, which is not what we want. It can also easily make some corrupt edit log files unreadable.
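For illustration, the failure mode looks roughly like this. This is a minimal sketch, not the actual FSEditLogOp.Reader code; the constant and the method shape here are assumptions:

    import java.io.BufferedInputStream;
    import java.io.DataInputStream;
    import java.io.IOException;

    class EditLogReadSketch {
      // The 100 MB mark limit described above.
      static final int MAX_OP_SIZE = 100 * 1024 * 1024;

      static void readOp(BufferedInputStream in) throws IOException {
        // mark() tells the stream to retain up to MAX_OP_SIZE bytes so
        // that reset() can rewind; the buffer can grow to the full 100 MB.
        in.mark(MAX_OP_SIZE);
        DataInputStream din = new DataInputStream(in);
        byte opCode = din.readByte();    // the opcode itself
        int length = din.readInt();      // bogus data can yield an absurd length
        byte[] body = new byte[length];  // huge allocation driven by garbage
        din.readFully(body);             // and the stream buffers it all, too
      }
    }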

We should have a stream limiter interface that throws a clean IOException when we get into this situation, rather than consuming huge amounts of memory.
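The attached patches define the real interface; purely as a sketch of the idea, and assuming hypothetical names like LimitedInputStream and setLimit, a limiting wrapper might look like this:

    import java.io.FilterInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    // Hypothetical wrapper: fails fast with a clean IOException instead
    // of letting a corrupt opcode drag in an unbounded amount of data.
    class LimitedInputStream extends FilterInputStream {
      private long remaining = Long.MAX_VALUE;

      LimitedInputStream(InputStream in) {
        super(in);
      }

      // Cap how many bytes the next opcode may consume.
      void setLimit(long limit) {
        this.remaining = limit;
      }

      @Override
      public int read() throws IOException {
        checkLimit();
        int b = in.read();
        if (b != -1) {
          remaining--;
        }
        return b;
      }

      @Override
      public int read(byte[] buf, int off, int len) throws IOException {
        checkLimit();
        int n = in.read(buf, off, (int) Math.min(len, remaining));
        if (n > 0) {
          remaining -= n;
        }
        return n;
      }

      private void checkLimit() throws IOException {
        if (remaining <= 0) {
          throw new IOException("read limit exceeded while decoding an "
              + "edit log opcode; the log is probably corrupt");
        }
      }
    }

The edit log reader could then call setLimit(MAX_OP_SIZE) before decoding each opcode and get a clean failure instead of a 100 MB buffer.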

Attachments

    1. HDFS-3440.001.patch (10 kB, Colin McCabe)
    2. HDFS-3440.002.patch (12 kB, Colin McCabe)


People

    Assignee: Colin McCabe (cmccabe)
    Reporter: Colin McCabe (cmccabe)
    Votes: 0
    Watchers: 6

Dates

    Created:
    Updated:
    Resolved: