Hadoop Common / HADOOP-646

name node server does not load large (> 2^31 bytes) edits file


Details

    • Type: Bug
    • Status: Closed
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 0.8.0
    • Fix Version/s: 0.9.0
    • Component/s: None
    • Labels: None

    Description

      FileInputStream.available() returns negative values when reading a large file (> 2^31 bytes); this is a known (unresolved) Java bug:
      http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6402006

      Consequence: a large edits file is not loaded and is then deleted without any warning. The system silently reverts to the old fsimage.

      This happens in jdk1.6 as well, i.e. the bug has not yet been fixed.
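
      A minimal sketch (not the actual FSEditLog code; the class name and file path are hypothetical) of how a read loop guarded by available() silently skips a large edits file: available() returns an int, so for a file larger than 2^31 - 1 bytes the value can come back negative and the loop body never runs.

          import java.io.DataInputStream;
          import java.io.FileInputStream;
          import java.io.IOException;

          public class AvailableOverflowDemo {
              public static void main(String[] args) throws IOException {
                  // "edits" is a placeholder path; any file > 2 GB reproduces the
                  // issue on JDKs affected by the bug linked above.
                  DataInputStream in = new DataInputStream(new FileInputStream("edits"));
                  try {
                      long processed = 0;
                      while (in.available() > 0) {   // negative for a > 2^31-byte file
                          in.readByte();             // stand-in for reading one edit record
                          processed++;
                      }
                      // On an affected JDK this prints 0: the file is treated as empty.
                      System.out.println("records processed: " + processed);
                  } finally {
                      in.close();
                  }
              }
          }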

      In addition, when I was finally able to load my large cron-backed-up edits file (6.5 GB) with a kludgy work-around, the blocks no longer existed on the data node servers; they had probably been deleted during the previous attempts, when the name node server did not know about the changed situation.

      The moral, until this is fixed or worked around: don't wait too long to restart the name node server. Otherwise this is a way to lose the entire DFS.
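
      One possible work-around is sketched below (this is an illustration only, not necessarily what the attached edits.patch does): bound the read loop by the file length, which java.io.File.length() reports as a long, instead of relying on available(), and tolerate a truncated tail rather than discarding the whole file.

          import java.io.DataInputStream;
          import java.io.EOFException;
          import java.io.File;
          import java.io.FileInputStream;
          import java.io.IOException;

          public class LoadLargeEdits {
              // Counts "records" (single bytes as a stand-in) in an edits file of
              // arbitrary size by tracking the position ourselves.
              public static long countRecords(File edits) throws IOException {
                  long length = edits.length();      // long, safe for files > 2 GB
                  DataInputStream in = new DataInputStream(new FileInputStream(edits));
                  long records = 0;
                  try {
                      long read = 0;
                      while (read < length) {
                          in.readByte();             // stand-in for one edit record
                          read++;
                          records++;
                      }
                  } catch (EOFException eof) {
                      // A truncated tail is tolerated instead of dropping the file.
                  } finally {
                      in.close();
                  }
                  return records;
              }
          }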

      Attachments

        1. edits.patch (1 kB, attached by Milind Barve)


          People

            Assignee: Milind Barve (milindb)
            Reporter: Christian Kunz (ckunz)
            Votes: 0
            Watchers: 1

            Dates

              Created:
              Updated:
              Resolved: