HDFS-13863: FsDatasetImpl should log DiskOutOfSpaceException


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.1.0, 2.9.1, 3.0.3
    • Fix Version/s: 3.2.0, 3.0.4, 3.1.2
    • Component/s: hdfs
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      The code in the createRbw function is as follows:

              try {
                // First try to place the block on a transient volume.
                ref = volumes.getNextTransientVolume(b.getNumBytes());
                datanode.getMetrics().incrRamDiskBlocksWrite();
              } catch (DiskOutOfSpaceException de) {
                // Ignore the exception since we just fall back to persistent storage.
              } finally {
                if (ref == null) {
                  cacheManager.release(b.getNumBytes());
                }
              }
      

      I think we should log the exception: it took me a long time to track down this problem, and others may hit the same issue.
      While testing RAM disk storage, I found that no data was being written to the RAM disk. After debugging deep into the source code, I found that the RAM disk size was smaller than the reserved space. If a message had been logged, I would have resolved the problem quickly.
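      A minimal, self-contained sketch of the kind of change proposed here: log the DiskOutOfSpaceException before falling back to persistent storage, rather than swallowing it. This is not the actual patch; placeOnTransientVolume is a hypothetical stand-in for volumes.getNextTransientVolume, and the DiskOutOfSpaceException class here only mimics Hadoop's.

          import org.slf4j.Logger;
          import org.slf4j.LoggerFactory;

          public class FallbackLoggingExample {
            private static final Logger LOG =
                LoggerFactory.getLogger(FallbackLoggingExample.class);

            // Stand-in for org.apache.hadoop.util.DiskChecker.DiskOutOfSpaceException.
            static class DiskOutOfSpaceException extends Exception {
              DiskOutOfSpaceException(String msg) { super(msg); }
            }

            // Hypothetical stand-in for volumes.getNextTransientVolume(numBytes):
            // simulates a RAM disk smaller than the reserved space.
            static String placeOnTransientVolume(long numBytes)
                throws DiskOutOfSpaceException {
              throw new DiskOutOfSpaceException(
                  "Out of space: requested " + numBytes + " bytes");
            }

            public static void main(String[] args) {
              String ref = null;
              try {
                ref = placeOnTransientVolume(4096);
              } catch (DiskOutOfSpaceException de) {
                // The proposed change: surface why the write fell back to
                // persistent storage instead of silently dropping the exception.
                LOG.warn("Failed to place block on a transient volume, "
                    + "falling back to persistent storage", de);
              }
              if (ref == null) {
                ref = "persistent-volume";
              }
              LOG.info("Block placed on {}", ref);
            }
          }

      With this pattern, the reason for the fall-back appears in the datanode log at WARN level instead of leaving the operator to discover it by debugging.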

      Attachments

        1. HDFS-13863.001.patch
          0.9 kB
          Hui Fei
        2. HDFS-13863.002.patch
          1.0 kB
          Hui Fei
        3. HDFS-13863.003.patch
          1 kB
          Hui Fei


      People

        Assignee: Hui Fei (ferhui)
        Reporter: Hui Fei (ferhui)
        Votes: 0
        Watchers: 5
