Hadoop HDFS / HDFS-13005

HttpFs checks subdirectories ACL status when LISTSTATUS is used


Details

    • Type: Bug
    • Status: Open
    • Priority: Minor
    • Resolution: Unresolved
    • Affects Version/s: 2.7.3
    • Fix Version/s: None
    • Component/s: httpfs
    • Labels: None

    Description

      An HttpFs LISTSTATUS call fails if a subdirectory has an ACL, because org.apache.hadoop.fs.http.server.FSOperations.StatusPairs#StatusPairs retrieves the list of child objects and calls getAclStatus on each child one by one, rather than checking only the ACL of the target directory being listed. A caller who can list the directory may still be denied getAclStatus on a restricted child, which aborts the whole listing.
      Would like to know if this is intentional.

            /*
             * For each FileStatus, attempt to acquire an AclStatus.  If the
             * getAclStatus throws an exception, we assume that ACLs are turned
             * off entirely and abandon the attempt.
             */
            boolean useAcls = true;   // Assume ACLs work until proven otherwise
            ...
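For illustration, the two probing strategies can be simulated outside Hadoop. This is a minimal plain-Java sketch with hypothetical names (AclProbeDemo, getAclStatus stub), not the actual FSOperations code: it models a caller for whom getAclStatus succeeds on the listed directory but is denied on a restricted child such as /acltest/subdir.

```java
import java.util.Arrays;
import java.util.List;

// Simulation only: models the two ways HttpFs could probe ACL support.
public class AclProbeDemo {
    static class AccessControlException extends RuntimeException {}

    // Stub filesystem: the caller may read the parent's ACL, but is
    // denied on the restricted child (like /acltest/subdir above).
    static void getAclStatus(String path) {
        if (path.equals("/acltest/subdir")) {
            throw new AccessControlException();
        }
    }

    // Current behaviour: probe every child; one restricted child
    // makes the whole LISTSTATUS call fail.
    static boolean listByProbingChildren(List<String> children) {
        try {
            for (String child : children) {
                getAclStatus(child);
            }
            return true;
        } catch (AccessControlException e) {
            return false; // entire listing aborted
        }
    }

    // Alternative: probe only the directory actually being listed.
    static boolean listByProbingParent(String parent) {
        try {
            getAclStatus(parent);
            return true;
        } catch (AccessControlException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        List<String> children =
            Arrays.asList("/acltest/subdir", "/acltest/subdir2");
        System.out.println(listByProbingChildren(children)); // false
        System.out.println(listByProbingParent("/acltest")); // true
    }
}
```

The per-child strategy matches the denied getAclStatus on /acltest/subdir seen in the audit log below, while WebHDFS (which only lists the parent) succeeds.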
      

      Reproduce steps:

      # NOTE: The test user "admin" has full access to /acltest
      [root@sandbox ~]# hdfs dfs -ls -R /acltest
      drwxrwx---+  - hdfs test          0 2018-01-09 08:44 /acltest/subdir
      -rwxrwx---   1 hdfs test        647 2018-01-09 08:44 /acltest/subdir/derby.log
      drwxr-xr-x   - hdfs test          0 2018-01-09 09:15 /acltest/subdir2
      [root@sandbox ~]# hdfs dfs -getfacl /acltest/subdir
      # file: /acltest/subdir
      # owner: hdfs
      # group: test
      user::rwx
      user:hdfs:rw-
      group::r-x
      mask::rwx
      other::---
      
      # WebHDFS works
      [root@sandbox ~]# sudo -u admin curl --negotiate -u : "http://`hostname -f`:50070/webhdfs/v1/acltest?op=LISTSTATUS"
      {"FileStatuses":{"FileStatus":[
      {"accessTime":0,"aclBit":true,"blockSize":0,"childrenNum":1,"fileId":79057,"group":"test","length":0,"modificationTime":1515487493078,"owner":"hdfs","pathSuffix":"subdir","permission":"770","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
      {"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":79059,"group":"test","length":0,"modificationTime":1515489337849,"owner":"hdfs","pathSuffix":"subdir2","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}
      ]}}
      
      # But not via HttpFs
      [root@sandbox ~]# sudo -u admin curl --negotiate -u : "http://`hostname -f`:14000/webhdfs/v1/acltest?op=LISTSTATUS"
      {"RemoteException":{"message":"Permission denied: user=admin, access=EXECUTE, inode=\"\/acltest\/subdir\":hdfs:test:drwxrwx---","exception":"AccessControlException","javaClassName":"org.apache.hadoop.security.AccessControlException"}}
      
      # HDFS audit log
      [root@sandbox ~]# tail /var/log/hadoop/hdfs/hdfs-audit.log | grep -w admin
      2018-01-09 23:09:24,362 INFO FSNamesystem.audit: allowed=true   ugi=admin (auth:KERBEROS)       ip=/172.18.0.2  cmd=listStatus  src=/acltest    dst=null        perm=null       proto=webhdfs
      2018-01-09 23:09:31,937 INFO FSNamesystem.audit: allowed=true   ugi=admin (auth:PROXY) via httpfs/sandbox.hortonworks.com@EXAMPLE.COM (auth:KERBEROS)   ip=/172.18.0.2  cmd=listStatus  src=/acltest    dst=null        perm=null       proto=rpc
      2018-01-09 23:09:31,978 INFO FSNamesystem.audit: allowed=false  ugi=admin (auth:PROXY) via httpfs/sandbox.hortonworks.com@EXAMPLE.COM (auth:KERBEROS)   ip=/172.18.0.2  cmd=getAclStatus        src=/acltest/subdir     dst=null        perm=null       proto=rpc
      

      Attachments

        Activity

          People

            Assignee: Unassigned
            Reporter: Hajime Osako
            Votes: 0
            Watchers: 2
