Hadoop Common / HADOOP-11321

copyToLocal cannot save a file to an SMB share unless the user has Full Control permissions.

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.6.0
    • Fix Version/s: 2.7.0
    • Component/s: fs
    • Labels: None
    • Target Version/s:
    • Hadoop Flags: Reviewed

      Description

      In Hadoop 2, it is impossible to use copyToLocal to copy a file from HDFS to a destination on an SMB share. This is because in Hadoop 2, copyToLocal maps to two underlying RawLocalFileSystem operations: create and setPermission. On an SMB share, the user may be authorized for the create but denied for the setPermission, because Windows denies the WRITE_DAC right required by setPermission unless the user has Full Control permissions. Granting Full Control isn't feasible for most deployments, because it's insecure. This is a regression from Hadoop 1, where copyToLocal issued only a create and no separate setPermission.
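
      To make the failure mode concrete, the following is a minimal Win32 sketch of the two patterns (hypothetical helper functions for illustration, not code from the patch). The two-step sequence mirrors create followed by setPermission and requires the WRITE_DAC right for the second call; passing a prepared security descriptor through SECURITY_ATTRIBUTES applies the DACL at creation time and requires only create access:

      #include <windows.h>
      #include <aclapi.h>

      /* Failing pattern: create the file, then set the DACL in a separate,
       * path-based call.  SetNamedSecurityInfoW requires WRITE_DAC, which an
       * SMB share denies unless the user has Full Control. */
      DWORD twoStepCreate(LPWSTR path, PACL dacl) {
        HANDLE h = CreateFileW(path, GENERIC_WRITE, 0, NULL, CREATE_ALWAYS,
            FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) return GetLastError();
        CloseHandle(h);
        /* On the share, this second call fails with ERROR_ACCESS_DENIED. */
        return SetNamedSecurityInfoW(path, SE_FILE_OBJECT,
            DACL_SECURITY_INFORMATION, NULL, NULL, dacl, NULL);
      }

      /* Working pattern: hand the security descriptor to CreateFileW through
       * SECURITY_ATTRIBUTES, so permissions are applied at creation time. */
      DWORD oneStepCreate(LPCWSTR path, PSECURITY_DESCRIPTOR sd) {
        SECURITY_ATTRIBUTES sa = { sizeof(sa), sd, FALSE };
        HANDLE h = CreateFileW(path, GENERIC_WRITE, 0, &sa, CREATE_ALWAYS,
            FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) return GetLastError();
        CloseHandle(h);
        return ERROR_SUCCESS;
      }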

      Attachments

      1. HADOOP-11321.1.patch
        6 kB
        Chris Nauroth
      2. HADOOP-11321.2.patch
        6 kB
        Chris Nauroth
      3. winutils.tmp.patch
        4 kB
        Chris Nauroth
      4. HADOOP-11321.003.patch
        29 kB
        Chris Nauroth
      5. HADOOP-11321.004.patch
        29 kB
        Chris Nauroth
      6. HADOOP-11321.005.patch
        29 kB
        Chris Nauroth
      7. HADOOP-11321.006.patch
        31 kB
        Chris Nauroth
      8. HADOOP-11321.007.patch
        31 kB
        Chris Nauroth
      9. HADOOP-11321.008.patch
        31 kB
        Chris Nauroth


          Activity

          Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Mapreduce-trunk #1995 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1995/)
          HADOOP-11321. copyToLocal cannot save a file to an SMB share unless the user has Full Control permissions. Contributed by Chris Nauroth. (cnauroth: rev e996a1bfd4f3ada6cbd9633fe68fda1e0c910bdf)

          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/WindowsSecureContainerExecutor.java
          • hadoop-common-project/hadoop-common/src/main/winutils/libwinutils.c
          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestLocalDirsHandlerService.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java
          • hadoop-common-project/hadoop-common/src/main/winutils/include/winutils.h
          • hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #45 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/45/)
          HADOOP-11321. copyToLocal cannot save a file to an SMB share unless the user has Full Control permissions. Contributed by Chris Nauroth. (cnauroth: rev e996a1bfd4f3ada6cbd9633fe68fda1e0c910bdf)

          • hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
          • hadoop-common-project/hadoop-common/src/main/winutils/include/winutils.h
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestLocalDirsHandlerService.java
          • hadoop-common-project/hadoop-common/src/main/winutils/libwinutils.c
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java
          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/WindowsSecureContainerExecutor.java
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #41 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/41/)
          HADOOP-11321. copyToLocal cannot save a file to an SMB share unless the user has Full Control permissions. Contributed by Chris Nauroth. (cnauroth: rev e996a1bfd4f3ada6cbd9633fe68fda1e0c910bdf)

          • hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestLocalDirsHandlerService.java
          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-common-project/hadoop-common/src/main/winutils/libwinutils.c
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/WindowsSecureContainerExecutor.java
          • hadoop-common-project/hadoop-common/src/main/winutils/include/winutils.h
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk #1976 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1976/)
          HADOOP-11321. copyToLocal cannot save a file to an SMB share unless the user has Full Control permissions. Contributed by Chris Nauroth. (cnauroth: rev e996a1bfd4f3ada6cbd9633fe68fda1e0c910bdf)

          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java
          • hadoop-common-project/hadoop-common/src/main/winutils/include/winutils.h
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/WindowsSecureContainerExecutor.java
          • hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java
          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestLocalDirsHandlerService.java
          • hadoop-common-project/hadoop-common/src/main/winutils/libwinutils.c
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk #778 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/778/)
          HADOOP-11321. copyToLocal cannot save a file to an SMB share unless the user has Full Control permissions. Contributed by Chris Nauroth. (cnauroth: rev e996a1bfd4f3ada6cbd9633fe68fda1e0c910bdf)

          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java
          • hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
          • hadoop-common-project/hadoop-common/src/main/winutils/include/winutils.h
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/WindowsSecureContainerExecutor.java
          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-common-project/hadoop-common/src/main/winutils/libwinutils.c
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestLocalDirsHandlerService.java
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #44 (See https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/44/)
          HADOOP-11321. copyToLocal cannot save a file to an SMB share unless the user has Full Control permissions. Contributed by Chris Nauroth. (cnauroth: rev e996a1bfd4f3ada6cbd9633fe68fda1e0c910bdf)

          • hadoop-common-project/hadoop-common/src/main/winutils/include/winutils.h
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
          • hadoop-common-project/hadoop-common/src/main/winutils/libwinutils.c
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java
          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestLocalDirsHandlerService.java
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/WindowsSecureContainerExecutor.java
          • hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java
          Chris Nauroth added a comment -

          The test failure and Findbugs warnings are unrelated. I committed this to trunk and branch-2. Thanks very much to the code reviewers: Colin, Remus and Arpit.

          I filed HADOOP-11415 to track use of fchmod for providing a similar solution on Linux.
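
           (For context, the fchmod approach amounts to roughly the following POSIX sketch; this is an assumption about the direction of that follow-up, not code from HADOOP-11415.)

           #include <fcntl.h>
           #include <sys/stat.h>
           #include <unistd.h>

           /* Sketch: create the file, then apply the exact mode through the
            * already-open descriptor.  fchmod corrects for the umask that
            * open() applied, without a second path-based chmod call. */
           int createWithMode(const char *path, mode_t mode) {
             int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, mode);
             if (fd < 0) {
               return -1;
             }
             if (fchmod(fd, mode) != 0) {
               close(fd);
               return -1;
             }
             return fd;
           }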

          Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #6732 (See https://builds.apache.org/job/Hadoop-trunk-Commit/6732/)
          HADOOP-11321. copyToLocal cannot save a file to an SMB share unless the user has Full Control permissions. Contributed by Chris Nauroth. (cnauroth: rev e996a1bfd4f3ada6cbd9633fe68fda1e0c910bdf)

          • hadoop-common-project/hadoop-common/src/main/winutils/libwinutils.c
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
          • hadoop-common-project/hadoop-common/src/main/winutils/include/winutils.h
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestLocalDirsHandlerService.java
          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/WindowsSecureContainerExecutor.java
          • hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12687592/HADOOP-11321.008.patch
          against trunk revision 565d72f.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          -1 findbugs. The patch appears to introduce 23 new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:

          org.apache.hadoop.crypto.random.TestOsSecureRandom

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/5282//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/5282//artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-nodemanager.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/5282//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/5282//console

          This message is automatically generated.

          Arpit Agarwal added a comment - edited

          Thanks for fixing that Chris!

          +1 for the v008 patch, pending Jenkins.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12687562/HADOOP-11321.007.patch
          against trunk revision a97a1e7.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          -1 findbugs. The patch appears to introduce 23 new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:

          org.apache.hadoop.crypto.random.TestOsSecureRandom
          org.apache.hadoop.ha.TestZKFailoverController

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/5281//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/5281//artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-nodemanager.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/5281//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/5281//console

          This message is automatically generated.

          Chris Nauroth added a comment -

          Arpit Agarwal correctly pointed out an issue in the handling of the max length rule for CreateDirectory. If it must be possible to append an 8.3 file name, then we also need room for an additional path separator, in case the original path didn't provide it. In fact, in our case, we expect not to have a trailing path separator, because it gets removed by java.io.File#getAbsolutePath. Here is patch v008, changing to MAX_PATH - 13 instead of MAX_PATH - 12. Thanks for the good eye, Arpit.
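
           (For illustration, the length rule being described reduces to a check like the following hypothetical helper; the constants follow the comment text rather than the exact patch code.)

           #include <wchar.h>
           #include <windows.h>

           /* A directory path must leave room for a separator plus an 8.3 child
            * file name (1 + 12 characters), so directories longer than
            * MAX_PATH - 13 need the \\?\ long-path prefix before the Windows
            * APIs will accept them. */
           BOOL needsLongPathPrefix(LPCWSTR path, BOOL isDirectory) {
             size_t limit = isDirectory ? (MAX_PATH - 13) : (MAX_PATH - 1);
             return wcslen(path) > limit ? TRUE : FALSE;
           }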

          Colin P. McCabe added a comment -

          +1 from me as well. Thanks, Chris.

          Chris Nauroth added a comment -

           I'm attaching patch v007. It has one small change from v006, shown in the incremental diff below. Instead of basing ownership on the TokenUser of the current process, the new patch uses TokenOwner. According to the documentation, this is the correct way to identify the default owner SID for newly created objects.

          http://msdn.microsoft.com/en-us/library/windows/desktop/aa379626%28v=vs.85%29.aspx

          More details are here:

          http://technet.microsoft.com/en-us/library/cc961992.aspx

           With this change, we correctly handle the special case in which files created by a member of the Administrators group are owned by the Administrators group instead of by the individual user SID.

          diff --git a/hadoop-common-project/hadoop-common/src/main/winutils/libwinutils.c b/hadoop-common-project/hadoop-c
          index c1dd914..040d563 100644
          --- a/hadoop-common-project/hadoop-common/src/main/winutils/libwinutils.c
          +++ b/hadoop-common-project/hadoop-common/src/main/winutils/libwinutils.c
          @@ -1515,7 +1515,7 @@ static DWORD GetWindowsDACLsForCreate(__in INT mode, __out PACL *ppDACL) {
             DWORD dwRtnCode = ERROR_SUCCESS;
             HANDLE hToken = NULL;
             DWORD dwSize = 0;
          -  PTOKEN_USER pTokenUser = NULL;
          +  PTOKEN_OWNER pTokenOwner = NULL;
             PTOKEN_PRIMARY_GROUP pTokenPrimaryGroup = NULL;
             PSID pOwnerSid = NULL, pGroupSid = NULL;
             PACL pDACL = NULL;
          @@ -1525,11 +1525,11 @@ static DWORD GetWindowsDACLsForCreate(__in INT mode, __out PACL *ppDACL) {
               goto done;
             }
           
          -  dwRtnCode = GetTokenInformationByClass(hToken, TokenUser, &pTokenUser);
          +  dwRtnCode = GetTokenInformationByClass(hToken, TokenOwner, &pTokenOwner);
             if (dwRtnCode != ERROR_SUCCESS) {
               goto done;
             }
          -  pOwnerSid = pTokenUser->User.Sid;
          +  pOwnerSid = pTokenOwner->Owner;
           
             dwRtnCode = GetTokenInformationByClass(hToken, TokenPrimaryGroup,
                 &pTokenPrimaryGroup);
          @@ -1549,7 +1549,7 @@ done:
             if (hToken) {
               CloseHandle(hToken);
             }
          -  LocalFree(pTokenUser);
          +  LocalFree(pTokenOwner);
             LocalFree(pTokenPrimaryGroup);
             return dwRtnCode;
           }
          
          Chris Nauroth added a comment -

          Thanks for reviewing, Arpit.

          Good catch. If the directory does not end in a trailing backslash then should it be -13 instead?

          By the time execution reaches here, we have passed the path through java.io.File#getAbsolutePath. This appears to have the behavior of removing the trailing slash if present. Even if that weren't the case, the JDK code has no special handling for a trailing slash, so I think we're all clear.

          http://hg.openjdk.java.net/jdk7/modules/jdk/file/a37326fa7f95/src/windows/native/java/io/io_util_md.c#l146

          Arpit Agarwal added a comment - edited

          Hi Chris,

          +//  Checks if the path is longer than (MAX_PATH - 12) in which case it needs to
          +//  be prepended with \\?\ for Windows OS to understand it.  The -12 is to
          +//  account for an additional constraint for directories that it must be possible
          +//  to append an 8.3 file name.
          

          Good catch. If the directory does not end in a trailing backslash then should it be -13 instead?

          +1 otherwise.

          FWIW I agree the prior behavior of FileContext#mkdir looks buggy. There is a small chance of creating a security issue if someone was depending on the buggy behavior. Perhaps it could be emulated by issuing a setPermissions if the createFile fails.

          Chris Nauroth added a comment -

          The Findbugs warnings and test failures are unrelated to this patch.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12687025/HADOOP-11321.006.patch
          against trunk revision 9458cd5.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          -1 findbugs. The patch appears to introduce 23 new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:

          org.apache.hadoop.crypto.random.TestOsSecureRandom

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/5260//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/5260//artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-nodemanager.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/5260//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/5260//console

          This message is automatically generated.

          Chris Nauroth added a comment -

          Thanks for the code review, Colin. I'm uploading patch v006 with the following changes.

          • ChecksumFileSystem: Removed the special case for calling setPermission. I agree that this shouldn't be necessary anymore.
          • NativeIO#createDirectoryWithMode: Changed to propagate the exception.
          • RawLocalFileSystem#mkOneDirWithMode: Changed to catch the exception and return false.
          • NativeIO.c: Changed to use #else instead of #ifdef UNIX and updated error messages.
          • libwinutils.c: I discovered a bug in our logic for deciding to convert to long path format. Normally, the length restriction is defined by MAX_PATH, which we already check. However, for directories, there is an additional requirement that it must be possible to append a child file name in 8.3 format (12 characters). This is documented about 1/3 of the page down here:
            http://msdn.microsoft.com/en-us/library/windows/desktop/aa365247%28v=vs.85%29.aspx . The patch handles this the same way that the JDK code handles it, by checking for MAX_PATH - 12.
          • TestLocalDirsHandlerService: I discovered an odd behavior in FileContext#mkdir. You can pass in an FsPermission. If creation of the directory fails because it already exists, then we still call setPermission and potentially change the mode. This happens due to the code in the current trunk version of RawLocalFileSystem#primitiveMkdir. This seems like a bug. POSIX mkdir and FileSystem don't behave this way. By removing RawLocalFileSystem#primitiveMkdir in my patch, that behavior is going away. I found that TestLocalDirsHandlerService was depending on this old behavior, but it's probably just a coincidence that the test suite keeps reusing the same directories across multiple tests. I changed the test suite to clean up in between tests. If anyone thinks this behavior of FileContext is intentional, please let me know.
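
           (To illustrate the POSIX contrast in the last item above, as a minimal sketch: mkdir reports EEXIST for a directory that already exists and never re-applies the mode.)

           #include <errno.h>
           #include <sys/stat.h>

           /* POSIX behavior: if the directory already exists, mkdir fails with
            * errno == EEXIST and the existing directory's mode is untouched;
            * there is no implicit setPermission on the existing directory. */
           int mkdirNoClobber(const char *path, mode_t mode) {
             if (mkdir(path, mode) != 0) {
               return (errno == EEXIST) ? 0 : -1;
             }
             return 0;
           }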

          I guess this is a nitpick, but perhaps we should only set ppSD when the return code is success?

          At this point in the code, it's guaranteed that the return code is success. The earlier code paths check the individual API calls for errors, and if found, jump to the done label, going past the assignment to ppSD. (I do agree with the principle that functions should only write to an out parameter if it succeeds.)

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12686966/HADOOP-11321.005.patch
          against trunk revision 7784b10.

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          -1 findbugs. The patch appears to introduce 23 new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:

          org.apache.hadoop.ha.TestZKFailoverControllerStress
          org.apache.hadoop.yarn.server.nodemanager.TestLocalDirsHandlerService

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/5259//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/5259//artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-nodemanager.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/5259//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/5259//console

          This message is automatically generated.

          Colin P. McCabe added a comment - edited

          Great. Let me take a look.

          -    if (permission != null) {
          +    // A local file system implementation may choose to create the file and set
          +    // permissions immediately in a single syscall.  If so, then skip setting
          +    // permissions here.
          +    if (permission != null && !(fs instanceof RawLocalFileSystem &&
          +        Path.WINDOWS && NativeIO.isAvailable())) {
          

          Do we need this part? Earlier in this function, we invoke the FileSystem#create method that takes an FsPermission, right? Then the FileSystem handles it atomically (hopefully) or non-atomically (ew). Surely we're not going to try to work around FileSystem instances that don't honor the permission passed to create? (And surely we don't have any of those... do we?)

              /**
               * Create a directory with permissions set to the specified mode.  By setting
               * permissions at creation time, we avoid issues related to the user lacking
               * WRITE_DAC rights on subsequent chmod calls.  One example where this can
               * occur is writing to an SMB share where the user does not have Full Control
               * rights, and therefore WRITE_DAC is denied.  This method mimics the
               * contract of {@link java.io.File#mkdir()}.  All exceptions are caught and
               * reported by returning {@code false} to the caller.
               *
               * @param path directory to create
               * @param mode permissions of new directory
               * @return boolean true if directory creation succeeded
               */
              public static boolean createDirectoryWithMode(File path, int mode) {
          

          I think we should just pass the exception through here. We don't want to keep people guessing if there is an I/O error. Just like the new JDK7 File functions throw an IOException on error, we should throw an IOE if the IO can't be done. Then the caller can catch it and ignore it if appropriate. I realize we need the (bad) "return false" semantics in FileSystem, but by putting this here, we encourage people to use it for other stuff.

          #ifdef UNIX
            THROW(env, "java/io/IOException",
              "The function Windows.createDirectoryWithMode0() is not supported on Unix");
          #endif
          
          #ifdef WINDOWS
          ...
          

          Would it make more sense to have an #else after the #ifdef WINDOWS clause? [edit: and change the message to say "is not supported on this platform" rather than "not supported on Unix"]

          in libwinutils.c:

            *ppSD = pSD;
          
          done:
            if (dwRtnCode != ERROR_SUCCESS) {
              LocalFree(pSD);
            }
          

          I guess this is a nitpick, but perhaps we should only set ppSD when the return code is success? Of course it won't matter for callers that check the return code, as they should.
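
           (As a generic sketch of that convention, not the patch code: write the out parameter only on the success path, and free any partial allocation otherwise.)

           #include <windows.h>

           /* Hypothetical builder showing the convention: *ppSD is written only
            * when every prior call succeeded; on failure the partial allocation
            * is released and the caller's pointer is left untouched. */
           DWORD buildSecurityDescriptor(__out PSECURITY_DESCRIPTOR *ppSD) {
             DWORD dwRtnCode = ERROR_SUCCESS;
             PSECURITY_DESCRIPTOR pSD = LocalAlloc(LPTR,
                 SECURITY_DESCRIPTOR_MIN_LENGTH);
             if (pSD == NULL) {
               dwRtnCode = GetLastError();
               goto done;
             }
             if (!InitializeSecurityDescriptor(pSD, SECURITY_DESCRIPTOR_REVISION)) {
               dwRtnCode = GetLastError();
               goto done;
             }
           done:
             if (dwRtnCode == ERROR_SUCCESS) {
               *ppSD = pSD;      /* assign only on success */
             } else {
               LocalFree(pSD);   /* LocalFree(NULL) is a no-op */
             }
             return dwRtnCode;
           }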

          Chris Nauroth added a comment -

          I'm uploading patch v005 to fix a bug in my error handling for INVALID_HANDLE_VALUE. This is the diff since v004:

          diff --git a/hadoop-common-project/hadoop-common/src/main/winutils/libwinutils.c b/hadoop-common-project/hadoop-comm
          index b006f17..c63ea3f 100644
          --- a/hadoop-common-project/hadoop-common/src/main/winutils/libwinutils.c
          +++ b/hadoop-common-project/hadoop-common/src/main/winutils/libwinutils.c
          @@ -1696,10 +1696,13 @@ DWORD CreateFileWithMode(__in LPCWSTR lpPath, __in DWORD dwDesiredAccess,
           
             hFile = CreateFileW(lpLongPath, dwDesiredAccess, dwShareMode, &sa,
               dwCreationDisposition, dwFlagsAndAttributes, NULL);
          -  if (hFile != INVALID_HANDLE_VALUE) {
          -    *pHFile = hFile;
          +  if (hFile == INVALID_HANDLE_VALUE) {
          +    dwRtnCode = GetLastError();
          +    goto done;
             }
           
          +  *pHFile = hFile;
          +
           done:
             LocalFree(lpLongPath);
             LocalFree(pDACL);
          
          Chris Nauroth added a comment -

          Remus Rusanu, thank you for the clarifications.

          Findbugs is a known, unrelated issue right now. I could not repro the javac and javadoc warnings locally, so it seems like something was strange about that Jenkins run. Let's continue with code review, and I'll try another Jenkins run at a later time.

          Hide
          rusanu Remus Rusanu added a comment -

The WSCE file operations should only apply to container launch/localization. Even if all the NM operations succeed on an SMB share, an impersonated container launched by WSCE will still fail to launch on an SMB share, because launching it implies Kerberos delegation, and the S4U token used by the WSCE impersonation model does not support delegation.

          rusanu Remus Rusanu added a comment -

          The WSCE has no versioning in the IDL. As the calls are always local, it is expected that the binaries deployed are self-consistent.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12686166/HADOOP-11321.004.patch
          against trunk revision 2ed90a5.

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          -1 javac. The applied patch generated 1216 javac compiler warnings (more than the trunk's current 156 warnings).

          -1 javadoc. The javadoc tool appears to have generated 49 warning messages.
          See https://builds.apache.org/job/PreCommit-HADOOP-Build/5221//artifact/patchprocess/diffJavadocWarnings.txt for details.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          -1 findbugs. The patch appears to introduce 40 new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/5221//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/5221//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
          Javac warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/5221//artifact/patchprocess/diffJavacWarnings.txt
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/5221//console

          This message is automatically generated.

          cnauroth Chris Nauroth added a comment -

          Patch v004 is a rebase after the HADOOP-11349 changes went in to RawLocalFileSystem.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12686133/HADOOP-11321.003.patch
          against trunk revision 03867eb.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/5219//console

          This message is automatically generated.

          cnauroth Chris Nauroth added a comment -

Patch v3 implements a native code solution for creating local file system directories and files with an initial security descriptor equivalent to the mode passed by the caller. I have tested this locally. It fixes the originally reported problem for a Windows client accessing an SMB share. For a Linux client accessing an SMB share, there is no change in behavior in the Hadoop code, and it still works fine via the error handling in Linux CIFS mounts (silently ignoring the chmod failure).

          Here is a description of the changes.

          • RawLocalFileSystem: On Windows, make a JNI call to open a file descriptor for write instead of using java.io.FileOutputStream. There are 2 new methods that can be overridden in subclasses: mkOneDirWithMode and createOutputStreamWithMode. These are intended to support ElevatedRawLocalFileSystem, discussed in more detail below. We cannot remove the older protected methods mkOneDir and createOutputStream, because the class is annotated Public and Stable. These are now implemented as simple forwarding methods.
          • ChecksumFileSystem: This needed to change some of the file creation calls it uses on the wrapped file system to pass down the mode. There is one awkward point here that required an instanceof check, but I think it's preferable over adding more abstract methods to FileSystem just to cover one special case. Several calls are rerouted to equivalent overloaded methods that can accept mode as an argument.
          • NativeIO: Added Windows-specific JNI wrappers for a native implementation of directory and file creation. For file creation, I needed to be careful to match the semantics that were implemented by java.io.FileOutputStream. This involved reading a bit of the JDK code, and I've described the details in comments.
• NativeIO.c: Added JNI implementations for the 2 new functions. These are pretty boring and just pass through to the real action in... (a sketch of this pass-through shape appears after this list)
          • libwinutils.c: This contains the implementation of calling the relevant Windows APIs. I've reused our implementation of translating a mode to a discretionary access control list (GetWindowsDACLs), so this will be consistent with, for example, winutils.exe chmod.
          • WindowsSecureContainerExecutor: This needed some adjustment to be able to handle the changes required in RawLocalFileSystem. Remus Rusanu, this implementation still does creation and setting permissions in separate syscalls. I suppose we could consolidate to a single syscall if we update the IDL for the RPC protocol to accept a mode argument, but I didn't want to tackle that in this patch. I also don't know the compatibility constraints around versioning this RPC protocol. This change was focused on a bug fix related to SMB shares. I don't believe this is a major limitation for ElevatedRawLocalFileSystem. If NodeManager is running backed by an SMB share for the localization directory, then there are probably lots of other problems anyway. If you think it's important to come back to this, please file a separate jira for that.
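For illustration, the "boring" JNI pass-through shape described in the NativeIO.c bullet above might look roughly like this. This is a hedged sketch, not the actual patch: the CreateDirectoryWithMode signature and the throw_ioe error helper are assumptions.

#include <jni.h>

/* Sketch of a thin JNI wrapper: convert the Java arguments, delegate to the
 * libwinutils implementation, and translate a failure code into IOException.
 * JNI name mangling: NativeIO$Windows.createDirectoryWithMode0 becomes
 * NativeIO_00024Windows_createDirectoryWithMode0. */
JNIEXPORT void JNICALL
Java_org_apache_hadoop_io_nativeio_NativeIO_00024Windows_createDirectoryWithMode0(
  JNIEnv *env, jclass clazz, jstring jpath, jint mode)
{
#ifdef WINDOWS
  DWORD dwRtnCode = ERROR_SUCCESS;
  LPCWSTR path = (LPCWSTR) (*env)->GetStringChars(env, jpath, NULL);
  if (path == NULL) return;  /* the JVM has already thrown OutOfMemoryError */
  dwRtnCode = CreateDirectoryWithMode(path, mode);  /* libwinutils, assumed */
  (*env)->ReleaseStringChars(env, jpath, (const jchar *) path);
  if (dwRtnCode != ERROR_SUCCESS) {
    throw_ioe(env, dwRtnCode);  /* NativeIO.c error helper, assumed */
  }
#else
  THROW(env, "java/io/IOException",
    "The function Windows.createDirectoryWithMode0() is not supported on this platform");
#endif
}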
          cnauroth Chris Nauroth added a comment -

The solution in the samba case is just to open up your umask, and then the create will succeed with the desired permission.

          Technically, we're not going to need any Hadoop code changes or changes to umask to support a Linux client with a Samba mount. This works (or at least it doesn't fail the client), because CIFS mounts are implemented to silently ignore chmod failures, as described in my earlier comment.

          Let me know if you want help on the fchmod thing... we could split this JIRA into one for Windows and one for Linux. Or maybe it makes sense to keep it all here, and just do it for both platforms in this patch?

          Thanks for volunteering, Colin P. McCabe. I'm about to post a patch shortly. Based on what I have so far, I suspect the Linux fchmod support will be a fairly small incremental thing. Let me know your thoughts after you see the patch, and then we can make a call on whether or not to roll it all into one patch here or split it out separately.

          cmccabe Colin P. McCabe added a comment -

          Awesome. Let me know if you want help on the fchmod thing... we could split this JIRA into one for Windows and one for Linux. Or maybe it makes sense to keep it all here, and just do it for both platforms in this patch?

I reviewed HADOOP-11349, which is a small fix for just the FD leak (good find).

          cnauroth Chris Nauroth added a comment -

          For Linux (and in general anything implementing POSIX.1-2001), perhaps we can just create the file with the requested permission, and then use fchmod on the file descriptor to "open it up" by setting that same permission.

          That sounds great. I think we've got it now! It's a shame we couldn't cut down to 1 syscall, but it seems like this is what it takes to maintain the current contract. Thanks, Colin.

          cmccabe Colin P. McCabe added a comment -

          Those are good points.

It sounds like for Windows, we can go with the solution you're looking at now. For Linux (and in general anything implementing POSIX.1-2001), perhaps we can just create the file with the requested permission, and then use fchmod on the file descriptor to "open it up" by setting that same permission. Then we get the best of all worlds... there is no point at which the permissions are more open than we desire, and we ignore umask (except on filesystems like samba where fchmod silently fails, but we can't solve that anyway... and it never worked in the past.) The solution in the samba case is just to open up your umask, and then the create will succeed with the desired permission.
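A minimal C sketch of that POSIX approach, assuming POSIX.1-2001 open/fchmod semantics (illustrative, not the eventual patch):

#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

/* Request the mode at creation time (the umask may still narrow it), then
 * fchmod the descriptor to exactly the requested bits. The file is never
 * more permissive than requested, and the explicit fchmod ignores the umask. */
static int create_with_mode(const char *path, mode_t mode) {
  int fd = open(path, O_WRONLY | O_CREAT | O_EXCL, mode);
  if (fd < 0) {
    return -1;
  }
  /* On CIFS mounts this can report success without effect, as noted above. */
  if (fchmod(fd, mode) != 0) {
    close(fd);
    unlink(path);
    return -1;
  }
  return fd;
}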

          cnauroth Chris Nauroth added a comment -

          While reviewing the existing code, I discovered another bug in the 2-step file creation followed by setting permission. If creation succeeds, but setting permission fails with an exception, then we leak a file descriptor. I don't plan to fix it here, so I filed HADOOP-11349 for follow-up. Depending on where we go with the atomic create-with-permissions discussion, it might supersede HADOOP-11349.
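In C terms, the leak has roughly this shape (a hypothetical sketch, not the actual Hadoop code):

#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

/* Two-step create-then-chmod: if the chmod fails and the error simply
 * propagates, the descriptor returned by open() is never closed and leaks.
 * The close() below is the cleanup whose absence HADOOP-11349 tracks. */
static int create_then_chmod(const char *path, mode_t mode) {
  int fd = open(path, O_WRONLY | O_CREAT, 0600);
  if (fd < 0) {
    return -1;
  }
  if (chmod(path, mode) != 0) {
    close(fd);  /* without this, the descriptor leaks on failure */
    return -1;
  }
  return fd;
}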

          cnauroth Chris Nauroth added a comment -

          This seems like a bug in FileSystem-- can you open a follow-on JIRA?

          I filed HADOOP-11347. I agree that it needs to be targeted to 3.0, because applications might depend on the current behavior.

          This still seems like incorrect behavior to me. If someone asks us for a restrictive set of permissions and we create the file with a more relaxed set, that seems insecure.

          I just had an a-ha moment reading the man page for CIFS mounts:

          http://linux.die.net/man/8/mount.cifs

          File And Directory Ownership And Permissions

          The core CIFS protocol does not provide unix ownership information or mode for files and directories. Because of this, files and directories will generally appear to be owned by whatever values the uid= or gid= options are set, and will have permissions set to the default file_mode and dir_mode for the mount. Attempting to change these values via chmod/chown will return success but have no effect.

          This explains the behaviors I've seen in Cygwin and SMB shares mounted on Linux clients. While this can be seen as a security hole, it's a known issue in the CIFS protocol, and we're not going to be able to do anything to change that.

          I think maybe the right thing to do is to issue a LOG.info if the process umask doesn't match the configured value of fs.permissions.umask-mode. Most users are probably running with a umask that matches fs.permissions.umask-mode anyway. If they aren't, the only problem is that the LocalFilesystem will create files with more restrictive permissions than intended. This isn't a security hole, just a hassle, since things are just more locked down than intended. And it should be easy to fix if the user reads the logs and realizes his umask is wrong.

          I suppose we can do this, but there is a risk that it would be backwards-incompatible. I think this is something else that can't be done on the 2.x line and would have to wait for 3.0. We can't anticipate if there are applications running with a 027 process umask and relying on the alternative contract of FileSystem to open up read access for other on new files. After deploying the change in those environments, others would no longer be able to access new files. As you said, this one is not a security problem, but it is a functionality problem. Users would have to respond by either adding explicit chmod calls or relaxing their process umask (and possibly compromising security concerns in other parts of their code).

          Regarding the info logging, we might just have to skip it, or at least suppress it when used from the fs shell. No one wants log output when running the shell commands.

          If you want to just do the Windows part here, I am +1 on that. But I think we should also do this for Linux. We should eliminate this race condition on all platforms. So if you choose to just do Windows here, I'll open a follow-on JIRA for Linux.

Thanks, Colin. My focus here is just getting functionality working on 2.x for both Linux and Windows. This is going to restore functionality that had been working in 1.x. My current assessment is that the patch will impact Windows code only. I'm not focused on the atomicity problem in the current code. Please feel free to file a follow-up, but again, I believe it must target 3.0.

          cmccabe Colin P. McCabe added a comment -

          Actually, it's more confusing than that, because we're not consistent about it. It looks like file creations always apply fs.permissions.umask-mode. For directories, FileContext applies it, but FileSystem doesn't. This means, for example, that "hadoop fs -mkdir" on the local file system is in fact governed by the process umask.

          This seems like a bug in FileSystem-- can you open a follow-on JIRA? It may be that we can only make the fix in 3.0, but we should still fix it.

          Another interesting thing I found after further experimentation is that the problem does not repro for an SMB share mounted on Linux. A chmod call "succeeds" without error, but simply does not change the permissions. The error handling seems to be specific to the OS client. This may in fact turn out to be a Windows-only bug, contrary to my prior statement.

          This still seems like incorrect behavior to me. If someone asks us for a restrictive set of permissions and we create the file with a more relaxed set, that seems insecure.

          I'm not aware of any Unix file/directory creation syscalls that let you bypass the umask. That would mean achieving atomic create-and-set-permissions would require a native umask(0) call. I'm very reluctant to do that, because we can't predict how this might compromise existing applications, especially for applications that use a mix of Hadoop and their own file creation calls. I suppose another possibility is to fork another process to do its own umask(0), but then we'd have a lot of process creation overhead.

          I think maybe the right thing to do is to issue a LOG.info if the process umask doesn't match the configured value of fs.permissions.umask-mode. Most users are probably running with a umask that matches fs.permissions.umask-mode anyway. If they aren't, the only problem is that the LocalFilesystem will create files with more restrictive permissions than intended. This isn't a security hole, just a hassle, since things are just more locked down than intended. And it should be easy to fix if the user reads the logs and realizes his umask is wrong.

          Considering all of that, I'm currently pursuing a Windows-only native code implementation, with Linux continuing to run the existing code path. I believe this can work, because Windows does not have a process umask or anything equivalent that would interfere with the intention of fs.permissions.umask-mode. Unfortunately, creations on Linux would still be subject to the race condition between creat/mkdir and chmod that we have in today's code, but at least the situation wouldn't get any worse.

          If you want to just do the Windows part here, I am +1 on that. But I think we should also do this for Linux. We should eliminate this race condition on all platforms. So if you choose to just do Windows here, I'll open a follow-on JIRA for Linux.

          Thanks, Chris Nauroth!

          cnauroth Chris Nauroth added a comment -

          Unfortunately, I've discovered a snag in the pure native code approach: the process umask. The Hadoop file system APIs have established a contract that permissions on newly created objects are governed not by the process umask, but rather by the fs.permissions.umask-mode configuration property. On the local file system, this is implemented by separate syscalls for creat/mkdir followed by chmod. This guarantees that if the caller asks for 644, then it gets 644, even if the process umask is 077.
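A small C demonstration of that contract (a sketch only; the real implementation lives in RawLocalFileSystem and native code):

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* With a process umask of 077, creat(path, 0644) actually yields 0600
 * (0644 & ~077). The separate, explicit chmod is not subject to the umask,
 * so it restores the full 0644 that the caller asked for. */
int main(void) {
  int fd;
  umask(077);
  fd = creat("/tmp/umask-demo", 0644);  /* created as 0600 */
  if (fd < 0) {
    perror("creat");
    return 1;
  }
  close(fd);
  if (chmod("/tmp/umask-demo", 0644) != 0) {  /* now 0644, umask ignored */
    perror("chmod");
    return 1;
  }
  return 0;
}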

          Actually, it's more confusing than that, because we're not consistent about it. It looks like file creations always apply fs.permissions.umask-mode. For directories, FileContext applies it, but FileSystem doesn't. This means, for example, that "hadoop fs -mkdir" on the local file system is in fact governed by the process umask.

          Another interesting thing I found after further experimentation is that the problem does not repro for an SMB share mounted on Linux. A chmod call "succeeds" without error, but simply does not change the permissions. The error handling seems to be specific to the OS client. This may in fact turn out to be a Windows-only bug, contrary to my prior statement.

          I'm not aware of any Unix file/directory creation syscalls that let you bypass the umask. That would mean achieving atomic create-and-set-permissions would require a native umask(0) call. I'm very reluctant to do that, because we can't predict how this might compromise existing applications, especially for applications that use a mix of Hadoop and their own file creation calls. I suppose another possibility is to fork another process to do its own umask(0), but then we'd have a lot of process creation overhead.

          Considering all of that, I'm currently pursuing a Windows-only native code implementation, with Linux continuing to run the existing code path. I believe this can work, because Windows does not have a process umask or anything equivalent that would interfere with the intention of fs.permissions.umask-mode. Unfortunately, creations on Linux would still be subject to the race condition between creat/mkdir and chmod that we have in today's code, but at least the situation wouldn't get any worse.

          cmccabe Colin P. McCabe added a comment -

          Awesome. Thanks for working on this, Chris.

cnauroth Chris Nauroth added a comment - edited

          Is there a native API on Windows that creates a file with certain permissions?

          I think this would be CreateFile with the fourth argument containing a security descriptor with a DACL and CreateDirectory with the same in the second argument. I had thought this would still get blocked for lack of the WRITE_DAC right, but after playing around with this more, it seems like WRITE_DAC is only enforced on modification of an existing object, not on creation of a new file or directory. Maybe I had an incorrect assumption.
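A hedged, self-contained sketch of that idea for directory creation. The SDDL string here is a rough illustrative stand-in (Administrators full control, Everyone read/execute); the real code derives the DACL from the POSIX mode instead.

#include <windows.h>
#include <sddl.h>
#include <stdio.h>

/* Build a security descriptor up front and hand it to CreateDirectoryW, so
 * the DACL is applied at creation time and no later WRITE_DAC-requiring
 * chmod is needed. Link with advapi32. */
int main(void) {
  PSECURITY_DESCRIPTOR pSD = NULL;
  SECURITY_ATTRIBUTES sa;
  if (!ConvertStringSecurityDescriptorToSecurityDescriptorW(
        L"D:PAI(A;;FA;;;BA)(A;;FRFX;;;WD)", SDDL_REVISION_1, &pSD, NULL)) {
    fprintf(stderr, "SDDL conversion failed: %lu\n", GetLastError());
    return 1;
  }
  sa.nLength = sizeof(sa);
  sa.lpSecurityDescriptor = pSD;
  sa.bInheritHandle = FALSE;
  if (!CreateDirectoryW(L"C:\\Temp\\demo-dir", &sa)) {
    fprintf(stderr, "CreateDirectoryW failed: %lu\n", GetLastError());
  }
  LocalFree(pSD);
  return 0;
}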

          I'm attaching a demo patch with some scratch code that shows how this works for directory creation. We'd need to plug something like this into JNI calls. On Linux, we'd need to do the equivalent for creat and mkdir.

          I agree with the other benefits of this approach you mentioned, Colin. I'm going to experiment with this more and see if it's feasible. I'll report back later.

          cmccabe Colin P. McCabe added a comment -

I guess my concern is that if the user requests a restrictive permission on the newly created file, and the system denies it, sets a more permissive one instead, and returns no error... that would be considered a security hole. I could imagine malicious users playing games with moving newly created files around rapidly so that some user got an EACCES and silently failed to set the permission on the file they created. After all, it's possible for a user to have write permission on a directory, without having read permission on any of the files inside of it. And this gives the bad guy the ability to move around files as he likes.

          To be honest, I really don't like the way we do this right now, even independently of your patch. Creating the file and setting the permissions on the file should not be two separate steps. The way we do it now opens us up to race conditions and possibly worse. And of course, it's less efficient to do two syscalls rather than one.

There is a FileOutputStream constructor that takes a FileDescriptor. This might allow us to do the "open" and "set permissions" steps as a single atomic step. I'm not familiar with the APIs on Windows, but certainly on Linux, the creat syscall takes an argument that can set the permissions of what it is creating. Is there a native API on Windows that creates a file with certain permissions? Even if Files#createFile doesn't work on Windows, we must have some JNI function we can write that does? This would fix the race condition and eliminate the problem you're describing here, at one go. Sure, we might have to use different methods of getting the FileDescriptor on Linux and Windows, but that seems OK.

Cygwin is an interesting precedent, but have you confirmed that it is doing the same thing as we are (creating the file and setting permissions in two separate steps)? I would be surprised if it was. It's ok to create the file or directory with more restrictive permissions than what was requested, but silently opening up the permissions is not ok, I think.

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12683664/HADOOP-11321.2.patch
          against trunk revision 78f7cdb.

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/5120//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/5120//console

          This message is automatically generated.

          cnauroth Chris Nauroth added a comment -

          I'm attaching a v2 patch. This refactors setPermissionIfAllowed into FileSystem. Subclasses can call this as needed. I think this is preferable over my earlier version with the method in FileUtil, which is annotated Public.

          -1 tests included. The patch doesn't appear to include any new or modified tests.

          I don't think we'll be able to add a reliable unit test that works across all dev environments. The only way I know to create this condition is to set up an SMB share and run as a user that doesn't have Full Control access. Not all environments will have the necessary rights to create an SMB share. I have tested this manually, with an SMB share mounted by both a Linux host and a Windows host.

          Colin P. McCabe, thank you for reviewing.

          It seems wrong to be asked to create a file with a certain permission, but then create it with a different permission and declare success.

          While I agree that it can be strange, it's interesting to note that there is existing precedent for handling it this way. This is the same behavior I see in both Linux and Cygwin operating on an SMB share. See below for a sample Cygwin session, where F: is mounted to an SMB share. I suppose this was a pragmatic choice for dealing with odd cases where a user may have rights to create a file but not control its permissions. This could include an SMB share denying WRITE_DAC or a file system using NFSv4 ACLs with a deny ACE on WRITE_ACL.

          cnauroth@WIN-NCDLEQLC13J /cygdrive/c/hdc
          $ mkdir -m 755 /cygdrive/f/dir2
          
          cnauroth@WIN-NCDLEQLC13J /cygdrive/c/hdc
          $ stat -c%a /cygdrive/f/dir2
          700
          

          Isn't there an API on Windows to set the permissions of the file we're creating?

          Unfortunately, this is not a matter of using a different syscall. The OS is denying the WRITE_DAC right, so access will be denied on attempts to set the discretionary access control list.

          Also, I might be misinterpreting, but this sounded like a suggestion to look for a possible solution in native Windows APIs. Just to clarify, this isn't a Windows-only bug. It's possible for a Linux host to mount an SMB share too. In that case, even if there was a native Windows API that could circumvent the WRITE_DAC denial, the solution wouldn't work for a Linux client.

          I haven't searched through all those shiny new JDK7 file APIs, surely it's there?

          I believe you're thinking of Files#createDirectory and Files#createFile, with the caller passing PosixFilePermissions as a file attribute. It turns out that doing this will fail fast on Windows with an unchecked exception stating that the file system doesn't implement POSIX permissions. The only reason the Hadoop code is able to do something like this is that we've written native code to translate POSIX permissions to a roughly equivalent discretionary access control list.

          If we absolutely, positively can't get this right, then we can have a config option to ignore the permission argument to local file creates... ugh.

          This is pretty standard Hadoop code review feedback. As a result, Hadoop now has 762 configuration properties. That's from a grep -c of core-default.xml, hdfs-default.xml, yarn-default.xml and mapred-default.xml, so the count doesn't include undocumented properties. I'm reluctant to keep pushing this higher. Would you consider the possibility that this is just a pragmatic choice for dealing with an edge case, based on the existing precedent shown elsewhere? Also, I've taken care that this patch only triggers this behavior for an EACCES, not general I/O errors or other problems.

          If you insist, then let me know, and I'll add a new configuration property.

          cmccabe Colin P. McCabe added a comment -

          I don't like the patch as currently written. It seems wrong to be asked to create a file with a certain permission, but then create it with a different permission and declare success.

Isn't there an API on Windows to set the permissions of the file we're creating? Usually you can set this at file creation time (which is really how we should be doing it anyway, for efficiency reasons). I haven't searched through all those shiny new JDK7 file APIs, surely it's there?

          If we absolutely, positively can't get this right, then we can have a config option to ignore the permission argument to local file creates... ugh.

          rusanu Remus Rusanu added a comment -

          LGTM

          hadoopqa Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12682536/HADOOP-11321.1.patch
          against trunk revision 72c141b.

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/5106//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/5106//console

          This message is automatically generated.

          cnauroth Chris Nauroth added a comment -

          I'm attaching a patch. The idea is to stop treating it as an error condition if an implicit setPermission fails due to access denied on a local file system. By "implicit", I mean that the setPermission was not called explicitly by an external caller, but implicitly as part of the caller's main operation, like a create. I'm not aware of any other local file systems where a user could be authorized to create a file, but then unauthorized to chmod that same file. The difference in behavior would have to be a known limitation when operating on SMB shares without Full Control permissions.
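The gist of that idea, expressed as a small hedged C sketch (the actual patch implements this on the Java side):

#include <errno.h>
#include <sys/stat.h>
#include <sys/types.h>

/* Tolerate only access-denied from an implicit chmod (one performed as part
 * of a larger operation such as create), while still surfacing every other
 * failure to the caller. */
static int set_permission_if_allowed(const char *path, mode_t mode) {
  if (chmod(path, mode) != 0 && errno != EACCES) {
    return -1;  /* a real error: propagate */
  }
  return 0;     /* success, or a tolerated EACCES (e.g. SMB denying WRITE_DAC) */
}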

          Fixing this was a bit messy. If anyone has another idea, I'd like to hear it. I'd especially appreciate if I could get a review from one of the Windows experts, such as Chuan Liu, Ivan Mitic, Remus Rusanu or shanyu zhao.

          cnauroth Chris Nauroth added a comment -

          The access denied error occurs in libwinutils.c, in ChangeFileModeByMask, when it calls the Windows API SetFileSecurity with a security descriptor containing the new discretionary access control list. The SMB share denies the WRITE_DAC right. More details on those APIs are here:

          http://msdn.microsoft.com/en-us/library/windows/desktop/aa379577(v=vs.85).aspx

          http://msdn.microsoft.com/en-us/library/windows/desktop/aa374892(v=vs.85).aspx
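For reference, the failing call has roughly this shape (a hedged sketch, not the winutils code; link with advapi32):

#include <windows.h>
#include <sddl.h>
#include <stdio.h>

/* Replacing the DACL of an existing file via SetFileSecurity requires the
 * WRITE_DAC right on that file. On an SMB share without Full Control this
 * typically fails with ERROR_ACCESS_DENIED, even though the same user was
 * allowed to create the file in the first place. */
int main(void) {
  PSECURITY_DESCRIPTOR pSD = NULL;
  if (!ConvertStringSecurityDescriptorToSecurityDescriptorW(
        L"D:PAI(A;;FA;;;BA)", SDDL_REVISION_1, &pSD, NULL)) {
    return 1;
  }
  if (!SetFileSecurityW(L"\\\\server\\share\\file.txt",
                        DACL_SECURITY_INFORMATION, pSD)) {
    fprintf(stderr, "SetFileSecurityW failed: %lu\n", GetLastError());
  }
  LocalFree(pSD);
  return 0;
}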


People

• Assignee: Chris Nauroth
• Reporter: Chris Nauroth
• Votes: 0
• Watchers: 8