Hadoop HDFS / HDFS-12325

SFTPFileSystem operations should restore cwd


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.9.0, 3.0.0-beta1
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

Description

      We've seen a case where writing to SFTPFileSystem led to unexpected behaviour:

      Given a directory ./data with more than one file in it, the steps to reproduce this error were simply:

      hdfs dfs -fs sftp://x.y.z -mkdir dir0
      hdfs dfs -fs sftp://x.y.z -copyFromLocal data dir0
      hdfs dfs -fs sftp://x.y.z -ls -R dir0
      

      But not all of the files show up in the ls output; in fact, more often than not just a single file shows up under that path.

      Digging deeper, we found that the rename, mkdirs and create operations in SFTPFileSystem change the current working directory of the SFTP session during their execution. For example, create contains:

            client.cd(parent.toUri().getPath()); // changes the session's cwd to the parent directory
            os = client.put(f.getName());        // opens the target file relative to the new cwd
      

      The issue here is that SFTPConnectionPool caches SFTP sessions (in idleConnections), and each cached session retains its current working directory. After these operations, the session is put back into the cache with a changed working directory; the change persists across calls and the drift accumulates, eventually causing seemingly random behaviour. Basically, this error surfaces when multiple file system objects are processed in one operation and relative paths are used.
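
      To make the failure mode concrete, here is a schematic Java sketch of how a cached session carries its cwd from one operation to the next. This is illustrative only, not actual Hadoop code: the tiny pool below is a hypothetical stand-in for SFTPConnectionPool, while ChannelSftp and its cd/put methods are JSch's real API.

        import com.jcraft.jsch.ChannelSftp;
        import java.io.OutputStream;
        import java.util.ArrayDeque;
        import java.util.Deque;

        // Illustrative stand-in for SFTPConnectionPool, not actual Hadoop code.
        class CwdLeakSketch {
          private final Deque<ChannelSftp> idle = new ArrayDeque<>();

          ChannelSftp borrow() { return idle.poll(); }      // reuse a cached session
          void release(ChannelSftp ch) { idle.push(ch); }   // cache it, cwd and all

          void demo(ChannelSftp session) throws Exception {
            release(session);                  // cached; suppose its cwd is /home/user

            ChannelSftp c1 = borrow();         // the same session comes back out
            c1.cd("/home/user/dir0");          // create() changes the session's cwd...
            try (OutputStream os = c1.put("file1")) { /* write file1 */ }
            release(c1);                       // ...and it is cached with cwd dir0

            ChannelSftp c2 = borrow();         // same session again
            try (OutputStream os = c2.put("data/file2")) { /* write file2 */ }
            // "data/file2" now resolves under dir0, not the cwd the caller expects
          }
        }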

      The fix here is to restore the current working directory of the SFTP session after each of these operations, so that sessions go back to the pool with their original cwd.
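
      A minimal sketch of that save/restore pattern for the create path, assuming JSch's ChannelSftp (which is what SFTPFileSystem uses); this illustrates the idea rather than reproducing the exact patch:

        // Remember the session's cwd, do the work, then cd back, so the session
        // returns to the pool with the directory it started with.
        String oldDir = client.pwd();          // JSch: the session's current directory
        try {
          client.cd(parent.toUri().getPath());
          os = client.put(f.getName());
        } finally {
          client.cd(oldDir);                   // restore before the session is reused
        }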

Attachments

    1. HDFS-12325.001.patch (2 kB, Chen Liang)


People

    Assignee: Chen Liang (vagarychen)
    Reporter: Namit Maheshwari (nmaheshwari)
    Votes: 0
    Watchers: 4

Dates

    Created:
    Updated:
    Resolved: