HDFS-13863: FsDatasetImpl should log DiskOutOfSpaceException


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.1.0, 2.9.1, 3.0.3
    • Fix Version/s: 3.2.0, 3.0.4, 3.1.2
    • Component/s: hdfs
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      The code in the createRbw function is as follows:

              try {
                // First try to place the block on a transient volume.
                ref = volumes.getNextTransientVolume(b.getNumBytes());
                datanode.getMetrics().incrRamDiskBlocksWrite();
              } catch (DiskOutOfSpaceException de) {
                // Ignore the exception since we just fall back to persistent storage.
              } finally {
                if (ref == null) {
                  cacheManager.release(b.getNumBytes());
                }
              }
      

      I think we should log the exception, because it took me a long time to track down the problem, and others may hit the same issue.
      While testing ram_disk storage, I found that no data was being written to the RAM disk. I had to debug deep into the source code to discover that the RAM disk size was smaller than the reserved space. If the exception had been logged, I would have resolved the problem much more quickly.
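
      The fix is to log the exception in the catch block instead of swallowing it silently. Below is a minimal sketch of that change, assuming FsDatasetImpl's existing LOG field; the exact log level and message in the committed patch may differ.

        try {
          // First try to place the block on a transient volume.
          ref = volumes.getNextTransientVolume(b.getNumBytes());
          datanode.getMetrics().incrRamDiskBlocksWrite();
        } catch (DiskOutOfSpaceException de) {
          // Log why the transient volume was skipped before falling back to
          // persistent storage, so an undersized RAM disk shows up in the
          // datanode log instead of failing silently.
          LOG.warn("Failed to place block " + b + " on a transient volume, "
              + "falling back to persistent storage: " + de.getMessage());
        } finally {
          if (ref == null) {
            cacheManager.release(b.getNumBytes());
          }
        }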

        Attachments

        1. HDFS-13863.001.patch
          0.9 kB
          Fei Hui
        2. HDFS-13863.002.patch
          1.0 kB
          Fei Hui
        3. HDFS-13863.003.patch
          1 kB
          Fei Hui


            People

            • Assignee: Fei Hui (ferhui)
            • Reporter: Fei Hui (ferhui)
            • Votes: 0
            • Watchers: 5

              Dates

              • Created:
              • Updated:
              • Resolved: