Hadoop Common / HADOOP-12718

Incorrect error message by fs -put local dir without permission

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 3.0.0-alpha2
    • Component/s: None
    • Labels:
    • Target Version/s:
    • Hadoop Flags:
      Incompatible change, Reviewed
    • Release Note:
      The `hadoop fs -ls` command now prints "Permission denied" rather than "No such file or directory" when the user doesn't have permission to traverse the path.

      Description

      When the user doesn't have access permission to the local directory, the "hadoop fs -put" command prints a confusing error message "No such file or directory".

      $ whoami
      systest
      $ cd /home/systest
      $ ls -ld .
      drwx------. 4 systest systest 4096 Jan 13 14:21 .
      $ mkdir d1
      $ sudo -u hdfs hadoop fs -put d1 /tmp
      put: `d1': No such file or directory
      

      It would be more informative if the message were:

      put: d1 (Permission denied)
      

      If the source is a local file, the error message is ok:

      put: f1 (Permission denied)
      
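      For context, the root cause is that java.io.File#listFiles() returns null both for a missing path and for an unreadable directory, so the shell reported the wrong condition. A minimal sketch of the distinction the fix draws, using plain JDK types with illustrative names (the real change lives in RawLocalFileSystem):

```java
import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.file.AccessDeniedException;

public class ListDirSketch {
    // Hypothetical helper: disambiguate the null that File#listFiles()
    // returns for BOTH a missing path and an unreadable directory.
    static File[] listChildren(File dir) throws IOException {
        if (!dir.exists()) {
            throw new FileNotFoundException("No such file or directory: " + dir);
        }
        if (dir.isDirectory() && !dir.canRead()) {
            // Unreadable directory: report the real condition instead of
            // letting the null result masquerade as a missing path.
            throw new AccessDeniedException(dir.toString());
        }
        File[] children = dir.listFiles();
        return children != null ? children : new File[0];
    }

    public static void main(String[] args) throws IOException {
        File tmp = new File(System.getProperty("java.io.tmpdir"));
        System.out.println("entries: " + listChildren(tmp).length);
    }
}
```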
      1. HADOOP-12718.008.patch
        5 kB
        John Zhuge
      2. HADOOP-12718.007.patch
        5 kB
        John Zhuge
      3. HADOOP-12718.006.patch
        4 kB
        John Zhuge
      4. HADOOP-12718.005.patch
        4 kB
        John Zhuge
      5. HADOOP-12718.004.patch
        1 kB
        Chris Nauroth
      6. HADOOP-12718.003.patch
        4 kB
        John Zhuge
      7. HADOOP-12718.002.patch
        2 kB
        John Zhuge
      8. TestFsShellCopyPermission-output.002.txt
        2 kB
        John Zhuge
      9. HADOOP-12718.001.patch
        2 kB
        John Zhuge
      10. TestFsShellCopyPermission-output.001.txt
        2 kB
        John Zhuge
      11. TestFsShellCopyPermission.001.patch
        4 kB
        John Zhuge

        Issue Links

          Activity

          jzhuge John Zhuge added a comment -

          Manual local permission unit test source code and test output.

          Can't check in the unit test because it requires the user to manually set up some local files and directories with different permissions in dir /tmp.

          jzhuge John Zhuge added a comment -

          Patch 001:

          • Print meaningful error message when there is no permission to read the local source directory in "fs -put".
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 7m 41s trunk passed
          +1 compile 6m 27s trunk passed with JDK v1.8.0_66
          +1 compile 7m 2s trunk passed with JDK v1.7.0_91
          +1 checkstyle 0m 20s trunk passed
          +1 mvnsite 1m 3s trunk passed
          +1 mvneclipse 0m 13s trunk passed
          +1 findbugs 1m 52s trunk passed
          +1 javadoc 0m 53s trunk passed with JDK v1.8.0_66
          +1 javadoc 1m 4s trunk passed with JDK v1.7.0_91
          +1 mvninstall 1m 38s the patch passed
          +1 compile 5m 55s the patch passed with JDK v1.8.0_66
          +1 javac 5m 55s the patch passed
          +1 compile 6m 57s the patch passed with JDK v1.7.0_91
          +1 javac 6m 57s the patch passed
          +1 checkstyle 0m 20s the patch passed
          +1 mvnsite 1m 0s the patch passed
          +1 mvneclipse 0m 14s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 2m 0s the patch passed
          +1 javadoc 0m 53s the patch passed with JDK v1.8.0_66
          +1 javadoc 1m 1s the patch passed with JDK v1.7.0_91
          +1 unit 6m 24s hadoop-common in the patch passed with JDK v1.8.0_66.
          -1 unit 6m 11s hadoop-common in the patch failed with JDK v1.7.0_91.
          +1 asflicense 0m 23s Patch does not generate ASF License warnings.
          60m 46s



          Reason Tests
          JDK v1.7.0_91 Failed junit tests hadoop.fs.TestSymlinkLocalFSFileContext



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12784303/HADOOP-12718.001.patch
          JIRA Issue HADOOP-12718
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 17743e26888c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 2085e60
          Default Java 1.7.0_91
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_66 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91
          findbugs v3.0.0
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/8469/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_91.txt
          unit test logs https://builds.apache.org/job/PreCommit-HADOOP-Build/8469/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_91.txt
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/8469/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Max memory used 77MB
          Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/8469/console

          This message was automatically generated.

          jzhuge John Zhuge added a comment -

          To run the manual unit test:

          1. In /tmp, create 2 files f1 and f2 with permission 0600. f1 should be owned by another user.
          2. In /tmp, create 2 dirs d1 and d2 with permission 0700. d1 should be owned by another user. Create some files in these 2 dirs.
          3. Apply the manual unit test patch, run the unit test TestFsShellCopyPermission.
          yzhangal Yongjun Zhang added a comment -

          Hi John Zhuge,

          Thanks for working on this. One little thing:

          Currently the patch does:

          throw new AccessDeniedException("Permission denied to read " + f);

          On Linux, I saw:

          [systest@nightly-1 ~]$ ls -l /root/tempfile1
          ls: cannot access /root/tempfile1: Permission denied
          [systest@nightly-1 ~]$ ls -l /lost+found
          ls: cannot open directory /lost+found: Permission denied
          

          Can we try to make the output similar to Linux's?
          Would you please post your test results on a real cluster with this fix?

          Thanks.

          jzhuge John Zhuge added a comment -

          Thanks for the review, Yongjun Zhang. Will do.

          jzhuge John Zhuge added a comment -

          Patch 002:

          • Change error message
          • Upload manual unit test output

          Test the patch on a cdh5.7 cluster:

          $ ls -l
          total 12
          drwx------ 2 hdfs    systest 4096 Jan 26 10:29 d1
          drwx------ 2 systest systest 4096 Jan 26 10:29 d2
          -rw------- 1 hdfs    systest    0 Jan 26 10:29 f1
          -rw------- 1 systest systest    0 Jan 26 10:29 f2
          -rw-r--r-- 1 root    root     173 Jan 25 22:17 log4j.log
          $ hadoop fs -put f1 /tmp
          put: /home/systest/f1 (Permission denied)
          $ hadoop fs -put f2 /tmp
          $ hadoop fs -put d1 /tmp
          put: cannot open directory file:/home/systest/d1: Permission denied
          $ hadoop fs -put d2 /tmp
          
          yzhangal Yongjun Zhang added a comment -

          Thanks John Zhuge for the new rev. +1 pending jenkins.

          jzhuge John Zhuge added a comment -

          Please note this jira focuses on the local dir case, not the local file case. The error message for a local file is not confusing, merely different from the one for a local dir. If we want to ensure a consistent error message for both, we'd have to modify a different code path for the local file.

          yzhangal Yongjun Zhang added a comment -

          Thanks John, yes, I'm aware of that difference based on the output you posted. I think it's OK for this jira. If we really want to remove the parentheses in the message for the local file code path, we can do it later.

          jzhuge John Zhuge added a comment -

          Agree with you.

          John Zhuge

          yzhangal Yongjun Zhang added a comment -

          HI John Zhuge,

          I saw
          https://builds.apache.org/job/PreCommit-HADOOP-Build/8471/console
          tries to apply

          Processing: HADOOP-12718
          HADOOP-12718 patch is being downloaded at Tue Jan 26 18:48:49 UTC 2016 from
          https://issues.apache.org/jira/secure/attachment/12784460/TestFsShellCopyPermission-output.002.txt
          ERROR: Unsure how to process HADOOP-12718.

          instead of your v2, because the last patch you submitted is TestFsShellCopyPermission-output.002.txt. Can you remove HADOOP-12718.002.patch and resubmit it so it will appear as the last file uploaded?

          Thanks.

          jzhuge John Zhuge added a comment -

          Re-upload 002 patch.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 7m 38s trunk passed
          +1 compile 5m 59s trunk passed with JDK v1.8.0_66
          +1 compile 6m 49s trunk passed with JDK v1.7.0_91
          +1 checkstyle 0m 22s trunk passed
          +1 mvnsite 1m 2s trunk passed
          +1 mvneclipse 0m 14s trunk passed
          +1 findbugs 1m 49s trunk passed
          +1 javadoc 0m 50s trunk passed with JDK v1.8.0_66
          +1 javadoc 1m 1s trunk passed with JDK v1.7.0_91
          +1 mvninstall 1m 38s the patch passed
          +1 compile 6m 2s the patch passed with JDK v1.8.0_66
          +1 javac 6m 2s the patch passed
          +1 compile 6m 47s the patch passed with JDK v1.7.0_91
          +1 javac 6m 47s the patch passed
          +1 checkstyle 0m 20s the patch passed
          +1 mvnsite 1m 0s the patch passed
          +1 mvneclipse 0m 13s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 2m 2s the patch passed
          +1 javadoc 0m 53s the patch passed with JDK v1.8.0_66
          +1 javadoc 1m 3s the patch passed with JDK v1.7.0_91
          +1 unit 6m 27s hadoop-common in the patch passed with JDK v1.8.0_66.
          +1 unit 6m 39s hadoop-common in the patch passed with JDK v1.7.0_91.
          +1 asflicense 0m 22s Patch does not generate ASF License warnings.
          60m 21s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:0ca8df7
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12784541/HADOOP-12718.002.patch
          JIRA Issue HADOOP-12718
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 6060ec808e69 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / d323639
          Default Java 1.7.0_91
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_66 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91
          findbugs v3.0.0
          JDK v1.7.0_91 Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/8476/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Max memory used 76MB
          Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/8476/console

          This message was automatically generated.

          yzhangal Yongjun Zhang added a comment -

          Committed to trunk, branch-2, and branch-2.8.

          Thanks John Zhuge for the contribution.

          hudson Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #9191 (See https://builds.apache.org/job/Hadoop-trunk-Commit/9191/)
          HADOOP-12718. Incorrect error message by fs -put local dir without (yzhang: rev 97056c3355810a803f07baca89b89e2bf6bb7201)

          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
          • hadoop-common-project/hadoop-common/CHANGES.txt
          jzhuge John Zhuge added a comment -

          Thanks a lot Yongjun Zhang.

          cnauroth Chris Nauroth added a comment -

          I'm considering reverting this patch for a few reasons.

          First, the patch was not coded to handle Windows correctly. Changing it to use FileUtil#canRead would solve that.

          However, the larger issue is that this might be backwards-incompatible. Prior to the patch, lack of access would return null. After the patch, lack of access throws an exception. Although this matches HDFS semantics, applications often have different expectations of the local file system. If an application was coded to check for null, but not handle AccessDeniedException, then there is a risk of breaking that application.

          Cc Steve Loughran for a second opinion from the file system contract perspective. The current spec for listStatus does not explicitly require a pre-condition check that lack of access results in AccessDeniedException.

          http://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-common/filesystem/filesystem.html

          If we revert this, then something will have to be done about YARN-4842, which included tests that depended on the new exception message. Cc Xuan Gong.

          I won't revert immediately. I'll wait until next week so that others get time to consider and comment.
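          The compatibility hazard described above can be sketched with plain JDK types (names here are illustrative, not Hadoop's API): a call site written against the old null-returning contract keeps compiling, but an uncaught exception now escapes it.

```java
import java.io.File;
import java.nio.file.AccessDeniedException;

public class CompatSketch {
    // Old contract: return null when the directory cannot be listed.
    static File[] listOld(File dir) {
        return dir.listFiles();  // null on missing OR unreadable path
    }

    // New contract: throw instead of returning null.
    // (The sketch conflates "missing" with "denied" for brevity.)
    static File[] listNew(File dir) throws AccessDeniedException {
        File[] names = dir.listFiles();
        if (names == null) {
            throw new AccessDeniedException(dir.toString());
        }
        return names;
    }

    public static void main(String[] args) {
        File missing = new File("/no/such/dir");
        // An application written against the old contract copes with null...
        if (listOld(missing) == null) {
            System.out.println("old contract: caller saw null and coped");
        }
        // ...but the same call site must now catch the exception, or it
        // breaks -- the backwards-incompatibility under discussion.
        try {
            listNew(missing);
        } catch (AccessDeniedException e) {
            System.out.println("new contract: exception surfaced");
        }
    }
}
```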

          jzhuge John Zhuge added a comment -

          Chris Nauroth Good point that the patch alters the FileSystem#listStatus interface to throw a new exception, AccessDeniedException. If we keep the patch, we need to make sure all callers can handle the new exception.

          On the other hand, nothing indicates listStatus can return null, and some callers do not test for null before use. See examples below. IMHO, listStatus should return an empty list instead of null. This probably needs a separate jira to either change RawLocalFileSystem#listStatus not to return null or fix the callers.

          AbstractContractGetFileStatusTest#testListStatusEmptyDirectory:

              assertEquals("ls on an empty directory not of length 0", 0,
                  fs.listStatus(subfolder).length);
          

          ChecksumFileSystem#copyToLocalFile:

                FileStatus[] srcs = listStatus(src);
                for (FileStatus srcFile : srcs) {
          

          SimpleCopyListing#getFileStatus:

                FileStatus[] fileStatuses = fileSystem.listStatus(path);
                if (excludeList != null && excludeList.size() > 0) {
                  ArrayList<FileStatus> fileStatusList = new ArrayList<>();
                  for(FileStatus status : fileStatuses) {
          
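          The defensive pattern those callers lack can be sketched as follows (illustrative JDK-only code, not the Hadoop API): normalize null to an empty array so that iteration is always safe.

```java
import java.io.File;

public class NullSafeListing {
    // Callers that iterate the result directly turn a null return into a
    // NullPointerException far from the real cause. Returning an empty
    // array instead of null is the safer contract.
    static File[] listOrEmpty(File dir) {
        File[] names = dir.listFiles();
        return names != null ? names : new File[0];
    }

    public static void main(String[] args) {
        // Iterating over the result is safe even for a missing path.
        for (File f : listOrEmpty(new File("/no/such/dir"))) {
            System.out.println(f);
        }
        System.out.println("done");
    }
}
```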
          jzhuge John Zhuge added a comment -

          Patch 003:

          • Use FileUtil#canRead instead of File#canRead
          • Update javadoc of FileSystem#listStatus
          • Update filesystem.md

          Fix the issues discovered by Chris Nauroth.

          stevel@apache.org Steve Loughran added a comment -
          1. There's a PathAccessDeniedException in Hadoop; not sure if it's the one to use or not... my IDE isn't showing much (any?) use of that exception. Given its obscurity, I'd stick with what the patch has: the java.nio one. (Looking at HADOOP-13130 I see that I'd just picked it up... I'll go with java.nio.AccessDeniedException there.)

          I think to tighten the specification properly you're going to have to follow the full process:

          1. look at what HDFS does. write a test for this if needed
          2. Look at what the other filesystems do, such as azure, webHDFS. Ignore S3A, I'm setting that up right now.
          3. If any of the others return null, they'll need to be fixed.
          4. then we should consider having a permissions test. This is trickier than it looks, as permissions is where things vary, and setting them can be hard to do in a cross-platform, cross-FS way.

          Finally, I'd make the FS requirement `throws AccessDeniedException, IOException` so that it is considered acceptable to raise other exceptions. This will avoid having to fix every other FS so that it is consistent with this tightening of the spec.

          cnauroth Chris Nauroth added a comment -

          Steve Loughran, I agree with those suggestions for tightening up the spec if we keep this patch, but first I have a more fundamental question about whether or not we keep the patch at all.

          Users of the local file system might not always expect the semantics of the spec and contract tests (as mostly reverse engineered from HDFS). In some cases, the caller is seeking to emulate HDFS semantics, such as mini-cluster based tests. In other cases, such as LocalDirAllocator, a caller explicitly calls FileSystem#getLocal and expects to work with the semantics of the local file system. For example, I mentioned on HADOOP-9507/HADOOP-13082 that there had been a case in the past of Hive expecting very particular semantics from the local file system.

          Unfortunately, aside from full testing of the whole ecosystem, it's hard to know for sure that this change won't break something, because it's going to start throwing an exception that users of the local file system hadn't seen before. This is why I was inclined to revert, at least within the 2.x line. I'd appreciate your thoughts on whether or not this makes sense or I'm being overly paranoid. Thanks!

          stevel@apache.org Steve Loughran added a comment -

          yeah, local is trouble. People assume that it just is their local FS.

          jzhuge John Zhuge added a comment -

          I think the patch should use o.a.h.security.AccessControlException (many usages) instead of java.nio.file.AccessDeniedException.

          cnauroth Chris Nauroth added a comment -

          I'd like to proceed with this plan:

          1. Revert the patch previously committed from all branches.
          2. If you'd still like to pursue this patch, I'd agree to retargeting it to 3.x, where we can mark it backwards-incompatible.
          3. If continuing on trunk, then we'll fold in the Windows compatibility fix.
          4. Let's also firm up the spec and contract tests as Steve suggested. I think you could achieve common testing for HDFS and possibly the local file system. The test likely can't work in any meaningful way against S3A/WASB/etc., because they enforce authorization to a whole S3 bucket/Azure Storage account/etc. without any meaningful per-file authorization after that.

          I'll wait a few days for further comment before I start reverting.

          jzhuge John Zhuge added a comment -

          The plan sounds good.

          I will use this jira to extend the contract with a new AccessControlException to indicate "permission denied".

          Created HADOOP-13191 to discuss whether null is a valid return for FileSystem#listStatus.
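          For context on that question: java.io.File#list conflates several failure modes by returning null, which is the root cause of the misleading shell message in this issue. A minimal standalone demonstration (the path name is arbitrary):

          ```java
          import java.io.File;

          public class NullListDemo {
              public static void main(String[] args) {
                  // File#list returns null, rather than throwing, when the path
                  // does not exist, is not a directory, or cannot be read -- the
                  // caller cannot tell these cases apart from the return value.
                  File missing = new File("/no/such/path");
                  System.out.println(missing.list() == null);
              }
          }
          ```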

          cnauroth Chris Nauroth added a comment -

          I'm going to reopen the issue now and attach a patch to do the revert. This is just the outcome of running git revert --no-commit 97056c3355810a803f07baca89b89e2bf6bb7201. It has been a while since this patch was committed though, so I'd like to take the revert through pre-commit as a precaution. I'm also going to temporarily raise this to blocker for 2.8.0, just so we don't forget to complete the revert before that release.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          -1 docker 0m 5s Docker failed to build yetus/hadoop:2c91fd8.



          Subsystem Report/Notes
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12808090/HADOOP-12718.004.patch
          JIRA Issue HADOOP-12718
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/9661/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          -1 docker 0m 4s Docker failed to build yetus/hadoop:2c91fd8.



          Subsystem Report/Notes
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12808090/HADOOP-12718.004.patch
          JIRA Issue HADOOP-12718
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/9662/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          -1 docker 0m 5s Docker failed to build yetus/hadoop:2c91fd8.



          Subsystem Report/Notes
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12808090/HADOOP-12718.004.patch
          JIRA Issue HADOOP-12718
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/9670/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 15s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 mvninstall 7m 27s trunk passed
          +1 compile 7m 18s trunk passed
          +1 checkstyle 0m 25s trunk passed
          +1 mvnsite 1m 2s trunk passed
          +1 mvneclipse 0m 12s trunk passed
          +1 findbugs 1m 26s trunk passed
          +1 javadoc 0m 47s trunk passed
          +1 mvninstall 0m 42s the patch passed
          +1 compile 6m 55s the patch passed
          +1 javac 6m 55s the patch passed
          +1 checkstyle 0m 24s the patch passed
          +1 mvnsite 0m 53s the patch passed
          +1 mvneclipse 0m 12s the patch passed
          +1 whitespace 0m 1s The patch has no whitespace issues.
          +1 findbugs 1m 30s the patch passed
          +1 javadoc 0m 46s the patch passed
          +1 unit 7m 54s hadoop-common in the patch passed.
          +1 asflicense 0m 21s The patch does not generate ASF License warnings.
          39m 16s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:e2f6409
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12808090/HADOOP-12718.004.patch
          JIRA Issue HADOOP-12718
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 69908b0a79d4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 0761379
          Default Java 1.8.0_91
          findbugs v3.0.0
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/9822/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/9822/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          cnauroth Chris Nauroth added a comment -

          I have reverted the prior commits from trunk, branch-2 and branch-2.8. I have downgraded the priority of the issue and retargeted it to 3.0.0-alpha1.

          hudson Hudson added a comment -

          SUCCESS: Integrated in Hadoop-trunk-Commit #9982 (See https://builds.apache.org/job/Hadoop-trunk-Commit/9982/)
          Revert "HADOOP-12718. Incorrect error message by fs -put local dir (cnauroth: rev 5f6bc65bb31270f2b5dfdfd941a0568fc1f3337f)

          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
          jzhuge John Zhuge added a comment -

          Patch 005:

          • Fix RawLocalFileSystem#listStatus to throw ACE when a dir exists but is not readable
          • Add unit test testPutSrcDirNoPerm and testPutSrcFileNoPerm to TestFsShellCopy

          Manual test output:

          $ ls -ld /tmp/d1
          drwx------  2 root  wheel  68 Oct 23 00:50 /tmp/d1
          $ hdfs dfs -put /tmp/d1 /tmp
          put: Permission denied for dir: /tmp/d1
          
          $ ls -ld /tmp/f1
          ----------  1 root  wheel  0 Oct 23 22:40 /tmp/f1
          $ hdfs dfs -put /tmp/f1 /tmp
          put: /tmp/f1 (Permission denied)
          
          hadoopqa Hadoop QA added a comment -
          +1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 15s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          +1 mvninstall 7m 2s trunk passed
          +1 compile 8m 40s trunk passed
          +1 checkstyle 0m 27s trunk passed
          +1 mvnsite 1m 7s trunk passed
          +1 mvneclipse 0m 14s trunk passed
          +1 findbugs 1m 35s trunk passed
          +1 javadoc 0m 46s trunk passed
          +1 mvninstall 0m 48s the patch passed
          +1 compile 8m 17s the patch passed
          +1 javac 8m 17s the patch passed
          -0 checkstyle 0m 25s hadoop-common-project/hadoop-common: The patch generated 2 new + 57 unchanged - 0 fixed = 59 total (was 57)
          +1 mvnsite 1m 5s the patch passed
          +1 mvneclipse 0m 12s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 1m 40s the patch passed
          +1 javadoc 0m 44s the patch passed
          +1 unit 8m 18s hadoop-common in the patch passed.
          +1 asflicense 0m 21s The patch does not generate ASF License warnings.
          43m 24s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:9560f25
          JIRA Issue HADOOP-12718
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12834878/HADOOP-12718.005.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux c04873e4755f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / d0a3479
          Default Java 1.8.0_101
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HADOOP-Build/10864/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/10864/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/10864/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          stevel@apache.org Steve Loughran added a comment -

          -1 as is. Afraid you're going to have to adopt Chris's recommendation of using FileUtil.canRead(). I think we could also take the opportunity to tighten the docs: "this is what you do if you can't read a path".

          Ideally, I'd like some results from a Windows test run, though that's going to depend on HADOOP-13586. Maybe now is a good time to fix that.

          We really need the full FS shell command suite pulled out into its own abstract suite and then implemented by the subclasses, just as any other contract. I'll leave that for another time.

          jzhuge John Zhuge added a comment -

          Steve Loughran, Thanks a lot for catching the mistake.

          Filed HADOOP-13751 Contract test for FS shell commands.

          What do you think of placing the permission check into FileUtil#list?

            public static String[] list(File dir) throws IOException {
              if (!canRead(dir)) {
                throw new AccessControlException("Permission denied for dir: " +
                    dir.toString());
              }
              String[] fileNames = dir.list();
              if (fileNames == null) {
                throw new IOException("Invalid directory or I/O error occurred for dir: "
                    + dir.toString());
              }
              return fileNames;
            }
          

          Currently FileUtil#list is only called by:

                  hadoop-common  (1 usage found)
                      org.apache.hadoop.fs  (1 usage found)
                          RawLocalFileSystem  (1 usage found)
                              listStatus(Path)  (1 usage found)
                                  474: String[] names = FileUtil.list(localf);
                  hadoop-hdfs  (3 usages found)
                      org.apache.hadoop.hdfs.server.datanode  (2 usages found)
                          BlockPoolSliceStorage  (1 usage found)
                              cleanupDetachDir(File)  (1 usage found)
                                  518: if (FileUtil.list(detachDir).length != 0) {
                          DataStorage  (1 usage found)
                              cleanupDetachDir(File)  (1 usage found)
                                  910: if (FileUtil.list(detachDir).length != 0) {
                      org.apache.hadoop.hdfs.server.datanode.fsdataset.impl  (1 usage found)
                          FsVolumeImpl  (1 usage found)
                              isBPDirEmpty(String)  (1 usage found)
                                  1035: if (rbwDir.exists() && FileUtil.list(rbwDir).length != 0) {
          

          They seem ok with the change but I am a little reluctant to widen the scope to modify a static utility function.
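          The proposed FileUtil#list can be exercised in isolation. The sketch below is an approximation that substitutes java.io.File#canRead for Hadoop's FileUtil.canRead (which additionally handles Windows via native code) and plain IOException for Hadoop's AccessControlException, so it is illustrative only:

          ```java
          import java.io.File;
          import java.io.IOException;
          import java.nio.file.Files;

          public class ListSketch {
              // Approximation of the proposed FileUtil#list: separate
              // "permission denied" from "invalid directory or I/O error".
              static String[] list(File dir) throws IOException {
                  if (dir.exists() && !dir.canRead()) {
                      throw new IOException("Permission denied for dir: " + dir);
                  }
                  String[] fileNames = dir.list();
                  if (fileNames == null) {
                      throw new IOException(
                          "Invalid directory or I/O error occurred for dir: " + dir);
                  }
                  return fileNames;
              }

              public static void main(String[] args) throws IOException {
                  File dir = Files.createTempDirectory("list-sketch").toFile();
                  new File(dir, "f1").createNewFile();
                  System.out.println(list(dir).length);  // one entry in a fresh dir
                  try {
                      list(new File(dir, "no-such-subdir"));
                  } catch (IOException e) {
                      System.out.println(e.getMessage().startsWith("Invalid directory"));
                  }
              }
          }
          ```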

          stevel@apache.org Steve Loughran added a comment -

          Looked through that code too. Why not post a query to hdfs-dev to see what they think?

          Looking at those uses, raising a new exception isn't going to make a difference: the read is going to fail anyway, isn't it? All that is happening is that there's a tighter error message.

          My main concern is Windows: things will break if the native lib isn't on the path, which is going to happen in production anyway. I'm just curious what happens during testing.

          jzhuge John Zhuge added a comment -

          I will post a question to hdfs-dev.

          Unfortunately I don't have access to Windows env. Will solicit community help upon the next patch.

          jzhuge John Zhuge added a comment -

          Patch 006:

          • Move the canRead() check, which throws ACE, from RawLocalFileSystem#listStatus to FileUtil#list, which already throws IOE
          • All current callers of FileUtil#list can handle the change
          hadoopqa Hadoop QA added a comment -
          +1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 13s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          +1 mvninstall 7m 30s trunk passed
          +1 compile 7m 28s trunk passed
          +1 checkstyle 0m 25s trunk passed
          +1 mvnsite 1m 2s trunk passed
          +1 mvneclipse 0m 14s trunk passed
          +1 findbugs 1m 34s trunk passed
          +1 javadoc 0m 46s trunk passed
          +1 mvninstall 0m 39s the patch passed
          +1 compile 7m 9s the patch passed
          +1 javac 7m 9s the patch passed
          +1 checkstyle 0m 23s the patch passed
          +1 mvnsite 0m 54s the patch passed
          +1 mvneclipse 0m 12s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 1m 26s the patch passed
          +1 javadoc 0m 43s the patch passed
          +1 unit 8m 38s hadoop-common in the patch passed.
          +1 asflicense 0m 23s The patch does not generate ASF License warnings.
          41m 2s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:9560f25
          JIRA Issue HADOOP-12718
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12835448/HADOOP-12718.006.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux fd4f4068fdf3 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 22ff0ef
          Default Java 1.8.0_101
          findbugs v3.0.0
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/10904/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/10904/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          yzhangal Yongjun Zhang added a comment -

          Thanks Chris Nauroth and Steve Loughran for finding the potential incompatibility of the earlier patch and the review/discussion, and thanks John Zhuge for the continued work here.

          I think patch rev 006 addressed the Windows issue pointed out by Chris. One minor comment: we should change the javadoc description to indicate that ACE will be thrown when the given dir can't be read. I'm +1 after that's addressed. And this will be marked as an incompatible change.

          Hi Chris and Steve, do you have further comment?

          Thanks.

          jzhuge John Zhuge added a comment -

          Patch 007:

          • Add javadoc for the new ACE to throw
          hadoopqa Hadoop QA added a comment -
          +1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 19s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          +1 mvninstall 9m 1s trunk passed
          +1 compile 8m 38s trunk passed
          +1 checkstyle 0m 29s trunk passed
          +1 mvnsite 1m 9s trunk passed
          +1 mvneclipse 0m 14s trunk passed
          +1 findbugs 1m 44s trunk passed
          +1 javadoc 0m 51s trunk passed
          +1 mvninstall 0m 46s the patch passed
          +1 compile 8m 50s the patch passed
          +1 javac 8m 50s the patch passed
          +1 checkstyle 0m 27s the patch passed
          +1 mvnsite 1m 6s the patch passed
          +1 mvneclipse 0m 15s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 1m 51s the patch passed
          +1 javadoc 0m 45s the patch passed
          +1 unit 9m 49s hadoop-common in the patch passed.
          +1 asflicense 0m 24s The patch does not generate ASF License warnings.
          48m 1s



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:9560f25
          JIRA Issue HADOOP-12718
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12837289/HADOOP-12718.007.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux bc496429f4ae 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / de01327
          Default Java 1.8.0_101
          findbugs v3.0.0
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/10998/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/10998/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          yzhangal Yongjun Zhang added a comment -

          Thanks John Zhuge for the new rev. +1.

          Hi Chris Nauroth or Steve Loughran, I will commit by tomorrow, would you please let me know if you have more comments? Best.

          stevel@apache.org Steve Loughran added a comment -

          Looking at this patch, I think I'd prefer it if the tests looked for a constant string declared in FSExceptionMessages rather than "permission denied". Tests that look for strings are always so, so brittle. And we should be looking at a consistent error message for all those documentation and support call issues.
          I think the asserts could also include the full text

          
          

          private void assertPermissionDenied(String text) {
            assertTrue(text + " does not contain " + FSExceptionMessages.NO_PERMISSION,
                text.contains(FSExceptionMessages.NO_PERMISSION));
          }

          One thing to consider (and I don't think there's a right or wrong here) is whether to make the exception a java.nio.file.AccessDeniedException or an org.apache.hadoop.fs.PathPermissionException. Both of these separate out the path for later analysis.

          I've just checked S3AFileSystem; there we throw AccessDeniedException. I don't know if that's good, or whether it should switch to org.apache.hadoop.fs.PathPermissionException.

          That's really a separate issue; I'd go through the blobstores and make them consistent if we ever choose one exception and message.

          jzhuge John Zhuge added a comment -

          Javadoc for AccessDeniedException:

           * Checked exception thrown when a file system operation is denied, typically
           * due to a file permission or other access check.
           *
           * <p> This exception is not related to the {@link
           * java.security.AccessControlException AccessControlException} or {@link
           * SecurityException} thrown by access controllers or security managers when
           * access to a file is denied.
          

          AccessDeniedException seems to be a better choice than AccessControlException. ACE is widely used, while ADE is only used in hadoop-aws and hadoop-yarn-common, probably because ADE was only introduced in Java 7. PathPermissionException is used only three times, all in tests.
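One point in ADE's favor (illustrated below with a hypothetical demo, not code from the patch): java.nio.file.AccessDeniedException extends FileSystemException, which keeps the offending path and the reason as separate fields instead of baking everything into one message string, so callers can recover the path for analysis later.

```java
import java.nio.file.AccessDeniedException;

public class AdeDemo {
  public static void main(String[] args) {
    // file, "other" file (none here), and reason are stored separately
    AccessDeniedException ade =
        new AccessDeniedException("/home/systest/d1", null, "Permission denied");
    System.out.println(ade.getFile());    // path only: /home/systest/d1
    System.out.println(ade.getReason());  // reason only: Permission denied
    System.out.println(ade.getMessage()); // combined: /home/systest/d1: Permission denied
  }
}
```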

          jzhuge John Zhuge added a comment - - edited

          Patch 008:

          • Use AccessDeniedException instead of AccessControlException
          • Add constant FSExceptionMessages.PERMISSION_DENIED
          • Please note testPutSrcFileNoPerm should not use the constant, because JDK FileInputStream#open raises an FNFE whose message already contains "Permission denied"
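The shape of the fix can be sketched as follows. This is a hypothetical illustration under stated assumptions: the real change lives in FileUtil and FSExceptionMessages, and the actual method names, constant name, and call sites may differ from this sketch.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.AccessDeniedException;

public class ListFilesSketch {
  // Stand-in for the FSExceptionMessages.PERMISSION_DENIED constant
  public static final String PERMISSION_DENIED = "Permission denied";

  /**
   * List the entries of a local directory.
   *
   * @throws AccessDeniedException if the directory cannot be read
   */
  public static File[] listFiles(File dir) throws IOException {
    File[] entries = dir.listFiles();
    if (entries == null) {
      // File#listFiles returns null both when the path is not a directory
      // and when it cannot be read; surfacing the latter as an access
      // error avoids the misleading "No such file or directory".
      throw new AccessDeniedException(dir.toString(), null, PERMISSION_DENIED);
    }
    return entries;
  }
}
```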
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 16s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          +1 mvninstall 7m 18s trunk passed
          +1 compile 11m 37s trunk passed
          +1 checkstyle 0m 35s trunk passed
          +1 mvnsite 1m 7s trunk passed
          +1 mvneclipse 0m 23s trunk passed
          +1 findbugs 1m 30s trunk passed
          +1 javadoc 0m 57s trunk passed
          +1 mvninstall 0m 39s the patch passed
          +1 compile 10m 29s the patch passed
          +1 javac 10m 29s the patch passed
          +1 checkstyle 0m 34s the patch passed
          +1 mvnsite 1m 8s the patch passed
          +1 mvneclipse 0m 23s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 1m 50s the patch passed
          +1 javadoc 0m 57s the patch passed
          -1 unit 8m 51s hadoop-common in the patch failed.
          +1 asflicense 0m 42s The patch does not generate ASF License warnings.
          51m 22s



          Reason Tests
          Failed junit tests hadoop.ha.TestZKFailoverController



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:e809691
          JIRA Issue HADOOP-12718
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12838105/HADOOP-12718.008.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 1a4bb458d8e4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 62d8c17
          Default Java 1.8.0_101
          findbugs v3.0.0
          unit https://builds.apache.org/job/PreCommit-HADOOP-Build/11040/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
          Test Results https://builds.apache.org/job/PreCommit-HADOOP-Build/11040/testReport/
          modules C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common
          Console output https://builds.apache.org/job/PreCommit-HADOOP-Build/11040/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          jzhuge John Zhuge added a comment -

          TestZKFailoverController failure is unrelated.

          yzhangal Yongjun Zhang added a comment -

          I saw quite a lot of places that refer to the string "Permission denied", which largely means the same thing as in the case here. We can replace those other references with the constant defined in this patch gradually, when appropriate.

          About

          Please note test testPutSrcFileNoPerm should not use the constant

          I think we could actually use the constant in the above test now, but I'm OK with deferring that until we replace the hard-coded strings in the source code that produces the message.

          Thanks.

          jzhuge John Zhuge added a comment - - edited

          In testPutSrcFileNoPerm, JDK FileInputStream#open raises an exception whose message contains "Permission denied". That's why I didn't use the constant there.

          yzhangal Yongjun Zhang added a comment -

          Thanks John Zhuge. I thought the exception came from one of the places in the Hadoop code base that use the hard-coded string "Permission denied" (I saw quite a lot); if it comes from the JDK, we have no control over it. So I'm +1 on the current patch.

          Now that we have introduced the constant, it'd be nice to use it across the Hadoop code base by replacing the hard-coded strings with references to it. But I think we don't need to do that in this JIRA.

          Any further comment Steve Loughran? If not, I will commit by tomorrow. Thanks.

          yzhangal Yongjun Zhang added a comment -

          Committed to trunk and marked as an incompatible change.

          Thanks John Zhuge for the patch and Steve Loughran for the review.

          hudson Hudson added a comment -

          SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10817 (See https://builds.apache.org/job/Hadoop-trunk-Commit/10817/)
          HADOOP-12718. Incorrect error message by fs -put local dir without (yzhang: rev 470bdaa27a771467fcd44dfa9c9c951501642ac6)

          • (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
          • (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSExceptionMessages.java
          • (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFsShellCopy.java
          jzhuge John Zhuge added a comment -

          Thanks Yongjun Zhang for the review and commit. Thanks Steve Loughran for the review.


            People

            • Assignee:
              jzhuge John Zhuge
              Reporter:
              jzhuge John Zhuge