Hadoop Common / HADOOP-16829 Über-jira: S3A Hadoop 3.4 features / HADOOP-16721

Add fs.s3a.rename.raises.exceptions to raise exceptions on rename failures


    Details

    • Type: Sub-task
    • Status: Open
    • Priority: Minor
    • Resolution: Unresolved
    • Affects Version/s: 3.2.0
    • Fix Version/s: None
    • Component/s: fs/s3
    • Labels: None

      Description

      The classic rename(source, dest) operation returns false on certain failures, which, while somewhat consistent with the POSIX APIs, turns out to be useless for identifying the cause of problems. Applications tend to have code which goes

      if (!fs.rename(src, dest)) throw new IOException("rename failed");
      

      While ultimately the rename/3 call needs to be made public (HADOOP-11452), it would then need adoption across applications. We can do this in the Hadoop modules, but for Hive, Spark etc. it will take a long time.

      Proposed: a switch to tell S3A to stop downgrading certain failures (source is a dir, dest is a file, src == dest, etc.) into "false". This can be turned on when trying to diagnose why things like Hive are failing.
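Turning the switch on would then be a one-line configuration change. A hedged sketch of the core-site.xml entry, using the property name from this issue's title (the default of false, preserving classic behaviour, is an assumption):

```xml
<!-- Hypothetical sketch: property name is from the issue title;
     a default of "false" (classic downgrade-to-false behaviour) is assumed. -->
<property>
  <name>fs.s3a.rename.raises.exceptions</name>
  <value>true</value>
</property>
```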

      Production code: trivial

      • change in rename(),
      • new option
      • docs.
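The change in rename() could amount to a conditional rethrow where failures are currently downgraded to false. Below is an illustrative standalone model of that behaviour, not the real S3AFileSystem code; the class, method shape, and failure mode chosen are all assumptions:

```java
import java.io.FileNotFoundException;
import java.io.IOException;

// Standalone sketch of the proposed switch, not actual S3A code.
// Only the option name comes from the issue; everything else is illustrative.
public class RenameBehaviorSketch {

    /**
     * Model of rename(): on failure, either downgrade to false (classic)
     * or rethrow the underlying exception (proposed behaviour).
     */
    static boolean rename(boolean sourceExists, boolean raiseExceptions)
            throws IOException {
        try {
            if (!sourceExists) {
                // one of the failure modes the issue lists
                throw new FileNotFoundException("rename source does not exist");
            }
            return true;
        } catch (IOException e) {
            if (raiseExceptions) {
                throw e;   // proposed: surface the real cause to the caller
            }
            return false;  // classic: the cause of the failure is lost
        }
    }

    public static void main(String[] args) throws IOException {
        // classic behaviour: the failure collapses to an uninformative false
        System.out.println("classic: " + rename(false, false));
        // proposed behaviour: the same failure raises a diagnosable exception
        try {
            rename(false, true);
        } catch (FileNotFoundException e) {
            System.out.println("raised: " + e.getMessage());
        }
    }
}
```

Applications with the `if (!fs.rename(...)) throw new IOException(...)` pattern would keep working either way; with the switch on they would simply see the underlying exception instead of a generic one.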

      Test code:

      • need to clear this option for rename contract tests
      • need to create a new FS with this set to verify the various failure modes trigger it.


      If this works we should do the same for ABFS and GCS. Hey, maybe even HDFS.


            People

            • Assignee: Unassigned
            • Reporter: stevel@apache.org (Steve Loughran)
            • Votes: 0
            • Watchers: 4
