Hadoop Common / HADOOP-10813

Define general filesystem exceptions (usable by any HCFS)


Details

    • Type: Improvement
    • Status: Open
    • Priority: Minor
    • Resolution: Unresolved
    • Affects Version/s: 2.2.0
    • Fix Version/s: None
    • Component/s: fs

    Description

      While Hadoop defines a filesystem API that makes it possible to use
      filesystem implementations other than HDFS (aka HCFS), we are missing
      HCFS-generic exceptions for some failures related to namenode federation.

      With namenode federation, one can address a particular namenode like this:
      hdfs://namenode_hostname/some/path. When the given namenode doesn't
      exist, UnknownHostException is thrown:

      $ hadoop fs -mkdir -p hdfs://bugcheck/foo/bar
      -mkdir: java.net.UnknownHostException: bugcheck
      Usage: hadoop fs [generic options] -mkdir [-p] <path> ...
      

      This is fine for HDFS, but there are other Hadoop filesystems with
      different implementations, and raising UnknownHostException doesn't make
      sense for them. For example, the path glusterfs://bugcheck/foo/bar points
      to the file /foo/bar on the GlusterFS volume named bugcheck. The meaning
      is the same as in HDFS: both the namenode hostname and the GlusterFS
      volume name select a particular filesystem tree available to Hadoop.
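      To illustrate the parallel: in both URIs, it is the authority component
      that names the filesystem tree; only the scheme differs. A small
      standalone demonstration (plain java.net.URI, not Hadoop code):

```java
import java.net.URI;

public class AuthorityDemo {
    public static void main(String[] args) {
        // Both URIs carry the tree name in the authority component:
        // for HDFS it is the namenode hostname, for GlusterFS the volume name.
        URI hdfs = URI.create("hdfs://bugcheck/foo/bar");
        URI gluster = URI.create("glusterfs://bugcheck/foo/bar");

        System.out.println(hdfs.getAuthority());    // bugcheck
        System.out.println(gluster.getAuthority()); // bugcheck
        System.out.println(hdfs.getPath());         // /foo/bar
    }
}
```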

      Would it make sense to define a general HCFS exception wrapping such
      cases, so that it would be possible to fail in the same way when the
      given filesystem tree is not available/defined, no matter which Hadoop
      filesystem is used?
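      One possible shape for such an exception, purely as a sketch of the
      proposal (the class name UnknownFilesystemTreeException is hypothetical
      and not part of any Hadoop release):

```java
import java.io.IOException;

// Hypothetical HCFS-generic exception: thrown when the filesystem tree named
// by the URI authority (namenode hostname, GlusterFS volume, ...) is unknown.
public class UnknownFilesystemTreeException extends IOException {
    private final String authority;

    public UnknownFilesystemTreeException(String authority, Throwable cause) {
        super("Unknown filesystem tree: " + authority, cause);
        this.authority = authority;
    }

    // The authority component of the URI that could not be resolved.
    public String getAuthority() {
        return authority;
    }
}
```

      An HDFS client could wrap its java.net.UnknownHostException in this, and
      a GlusterFS connector could throw it when the named volume does not
      exist, so callers catch one exception type regardless of the scheme.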

    Attachments

    Activity

    People

      Assignee: Unassigned
      Reporter: Martin Bukatovic (mbukatov)
      Votes: 0
      Watchers: 2

            Dates

              Created:
              Updated: