Hadoop HDFS / HDFS-6826

Plugin interface to enable delegation of HDFS authorization assertions

    Details

    • Type: New Feature
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.4.1
    • Fix Version/s: 2.7.0
    • Component/s: security
    • Labels: None

      Description

      When HBase data, HiveMetaStore data, or Search data is accessed via services (HBase region servers, HiveServer2, Impala, Solr), the services can enforce permissions on the corresponding entities (databases, tables, views, columns, search collections, documents). When the data is accessed directly by users reading the underlying data files (e.g. from a MapReduce job), it is desirable that the permissions of the data files map to the permissions of the corresponding data entity (e.g. table, column family, or search collection).

      To enable this, we need the necessary hooks in place in the NameNode to delegate authorization to an external system that can map HDFS files/directories to data entities and resolve their permissions based on the data entities' permissions.

      I’ll be posting a design proposal in the next few days.
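      As a rough illustration of the kind of hook this implies (the config key, class names, and method below are assumptions for this sketch, not the eventual API), the NameNode could load such a delegate from its configuration via reflection:

        // Illustrative sketch only: the config key and class names are
        // assumptions, not the API this issue will define.
        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.util.ReflectionUtils;

        public class AuthzPluginLoader {
          // Stand-in for the plugin contract this issue proposes.
          public interface AuthorizationProvider { /* checkPermission(...), etc. */ }

          // Stub default: the NameNode's built-in permission behavior.
          public static class DefaultAuthorizationProvider
              implements AuthorizationProvider { }

          // Hypothetical config key for selecting the plugin implementation.
          static final String AUTHZ_PROVIDER_KEY =
              "dfs.namenode.authorization.provider.class";

          static AuthorizationProvider load(Configuration conf) {
            // Fall back to built-in HDFS behavior when no plugin is configured.
            Class<? extends AuthorizationProvider> clazz = conf.getClass(
                AUTHZ_PROVIDER_KEY, DefaultAuthorizationProvider.class,
                AuthorizationProvider.class);
            return ReflectionUtils.newInstance(clazz, conf);
          }
        }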

      Attachments

      1. HDFS-6826.16.patch
        55 kB
        Arun Suresh
      2. HDFS-6826.15.patch
        55 kB
        Arun Suresh
      3. HDFS-6826.14.patch
        57 kB
        Arun Suresh
      4. HDFS-6826.13.patch
        57 kB
        Arun Suresh
      5. HDFS-6826.12.patch
        55 kB
        Arun Suresh
      6. HDFS-6826.11.patch
        39 kB
        Arun Suresh
      7. HDFS-6826.10.patch
        47 kB
        Arun Suresh
      8. HDFS-6826v9.patch
        41 kB
        Alejandro Abdelnur
      9. HDFS-6826-permchecker.patch
        9 kB
        Daryn Sharp
      10. HDFS-6826v7.6.patch
        100 kB
        Alejandro Abdelnur
      11. HDFS-6826v7.5.patch
        43 kB
        Alejandro Abdelnur
      12. HDFS-6826v7.4.patch
        64 kB
        Alejandro Abdelnur
      13. HDFS-6826v7.3.patch
        68 kB
        Alejandro Abdelnur
      14. HDFS-6826v8.patch
        21 kB
        Alejandro Abdelnur
      15. HDFS-6826v7.2.patch
        67 kB
        Alejandro Abdelnur
      16. HDFS-6826v7.1.patch
        67 kB
        Alejandro Abdelnur
      17. HDFS-6826v7.patch
        67 kB
        Alejandro Abdelnur
      18. HDFS-6826v6.patch
        58 kB
        Alejandro Abdelnur
      19. HDFS-6826v5.patch
        52 kB
        Alejandro Abdelnur
      20. HDFS-6826v4.patch
        52 kB
        Alejandro Abdelnur
      21. HDFS-6826v3.patch
        50 kB
        Alejandro Abdelnur
      22. HDFSPluggableAuthorizationProposal-v2.pdf
        136 kB
        Alejandro Abdelnur
      23. HDFS-6826-idea2.patch
        57 kB
        Alejandro Abdelnur
      24. HDFS-6826-idea.patch
        41 kB
        Alejandro Abdelnur
      25. HDFSPluggableAuthorizationProposal.pdf
        136 kB
        Alejandro Abdelnur

          Activity

          Daryn Sharp added a comment -

          I'm curious why standard permissions + ACLs are insufficient? Having the NN access external services while checking permissions adds latency to call processing which ties up handler threads. This will reduce the throughput of "normal" hdfs operations. If the external service is slow, or worse down, the NN may become unresponsive when all the handlers are tied up.

          Alejandro Abdelnur added a comment -

          Uploading a proposal that includes use cases, the current state of things, the proposed design, and how it would enable HiveMetaStore & Sentry-driven authorization for files belonging to tables.

          Alejandro Abdelnur added a comment -

          Daryn Sharp, I believe the uploaded proposal will answer your questions.

          Daryn Sharp added a comment -

          I understand the motivation but there has to be a better approach. Isn't this akin to an NFS server or ext4 basing its permission model on a MySQL query to access raw MySQL files?

          Every external dependency introduces latency and additional HA concerns. Tying up handlers during an operation, whether or not the fsn lock is held, is very dangerous and unacceptable for the reasons I originally cited. Currently, non-local edit logs (e.g. a shared NFS edit dir or journal node) are the only external dependency I'm aware of. This critical dependency is unavoidable for durability and consistency.

          However, if an external service exposing data entities in hdfs uses a supplemental authz scheme, it should be its responsibility to arbitrate access if fs-level permissions are insufficient.

          Alejandro Abdelnur added a comment -

          Daryn Sharp,

          A custom plugin would have a list of region prefixes that are subject to 'external' permissions; any path not matching these prefixes would go straight to the default plugin. Only paths matching the region prefixes would be subject to an 'external' permissions check.

          Attached is an initial prototype, with a basic test case using a custom plugin showing the proposed solution.
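          A minimal sketch of that prefix routing (the AuthorizationProvider contract shown is a stand-in, not the patch's actual API):

            // Hypothetical sketch of the prefix routing; not the actual patch code.
            import java.util.List;
            import org.apache.hadoop.fs.permission.FsAction;
            import org.apache.hadoop.security.AccessControlException;

            public class PrefixDelegatingProvider {
              // Stand-in contract; the real plugin API is defined by the patch.
              interface AuthorizationProvider {
                void checkPermission(String path, FsAction access)
                    throws AccessControlException;
              }

              private final List<String> externalPrefixes;   // e.g. Hive warehouse dirs
              private final AuthorizationProvider external;  // externally managed regions
              private final AuthorizationProvider dflt;      // stock HDFS behavior

              PrefixDelegatingProvider(List<String> prefixes,
                  AuthorizationProvider external, AuthorizationProvider dflt) {
                this.externalPrefixes = prefixes;
                this.external = external;
                this.dflt = dflt;
              }

              public void checkPermission(String path, FsAction access)
                  throws AccessControlException {
                // Paths under a managed prefix get the 'external' check; all
                // other paths go straight to the default permission logic.
                for (String prefix : externalPrefixes) {
                  if (path.startsWith(prefix)) {
                    external.checkPermission(path, access);
                    return;
                  }
                }
                dflt.checkPermission(path, access);
              }
            }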

          Daryn Sharp added a comment -

          It doesn't matter which or how many paths go through the custom plugin. Adding anything executed in a handler that can block, with or w/o the fsn lock, will put the entire NN in jeopardy.

          When it comes to problems with a slow external authz:

          1. Best-worst case: the number of special authz clients < IPC handlers. Authz clients suffocate the throughput of "normal" clients, DN heartbeats, and block reports, but the NN limps along.
          2. Worst-worst case: the number of special authz clients >= IPC handlers. The NN is effectively stalled. If the external authz service is down, and not just extremely slow, the latency from connection timeouts will cause the NN to go into an overloaded death spiral.

          I'll post an alternate approach that should require no client code changes shortly.

          Alejandro Abdelnur added a comment -

          Daryn Sharp, looking forward to your alternate approach.

          BTW, the plugin could be implemented in a way that the whole 'external' authorization data is fetched in advance, or that the external call has a maxCallTime and times out, returning strict default fallback permissions.
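          A sketch of that maxCallTime idea (all names hypothetical): the external lookup runs under a deadline and, on timeout, the plugin answers with restrictive default permissions instead of blocking a handler thread:

            // Hypothetical sketch; the executor/timeout plumbing is illustrative only.
            import java.util.concurrent.ExecutorService;
            import java.util.concurrent.Executors;
            import java.util.concurrent.Future;
            import java.util.concurrent.TimeUnit;
            import java.util.concurrent.TimeoutException;
            import org.apache.hadoop.fs.permission.FsPermission;

            public class BoundedExternalLookup {
              private final ExecutorService pool = Executors.newCachedThreadPool();
              private final long maxCallTimeMs;

              public BoundedExternalLookup(long maxCallTimeMs) {
                this.maxCallTimeMs = maxCallTimeMs;
              }

              public FsPermission permissionFor(final String path) {
                Future<FsPermission> f = pool.submit(() -> queryExternalSystem(path));
                try {
                  return f.get(maxCallTimeMs, TimeUnit.MILLISECONDS);
                } catch (TimeoutException e) {
                  f.cancel(true);
                  // Fall back to strict permissions so a slow external service
                  // cannot tie up NameNode handler threads indefinitely.
                  return new FsPermission((short) 0000);
                } catch (Exception e) {
                  return new FsPermission((short) 0000);
                }
              }

              private FsPermission queryExternalSystem(String path) {
                // Placeholder for the external authorization call.
                throw new UnsupportedOperationException("illustrative only");
              }
            }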

          Daryn Sharp added a comment -

          The critical question is whether a NN backend should be used to solve external app-level authz issues. The NN is a filesystem. It has permissions and ACLs, which should meet the authz requirements. Using an external backend rapidly becomes complicated to do "correctly" with respect to not impacting "real" NN operations.

          If permissions/ACLs aren't ideal, then how about a front-end authz manager? A MITM proxy service may be a cleaner approach that doesn't impact the NN. Using Hive as an example, all hive files are owned and accessible only by the hive user. The hive server runs an RPC service implementing the NN's ClientProtocol. The service applies custom authz checks for hive files before allowing access. The main issue is likely file r/w access, which is governed by block tokens. Block tokens are user-agnostic (don't get confused by the dead code for user checks), so the hive server can obtain block tokens usable by dfsclients. Hive tasks just access the hive files via hdfs://hive-nn/ instead of hdfs://nn/.

          Alejandro Abdelnur added a comment -

          Daryn Sharp,

          A proxy NN was one of the alternatives considered before settling for the Authz plugin.

          I like it because of the very same reasons you point out.

          The concerns with that approach are:

          • It is a new service
          • It has to support HA
          • It will be error prone for users to work with 2 NN URIs:
            • One for HDFS authz, another for Hive authz
            • A file or directory being added to a Hive table would first be referenced via the NN and then via the proxy NN
            • Users may add file URIs of the proxy NN in the HiveMetaStore

          Regarding the plugin approach: if the plugin does not interact with external systems during a filesystem RPC call and just uses in-memory data, that addresses your concerns, right?
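          For example (a hypothetical class, not the attached POC), all checks could be served from an in-memory map that a background thread refreshes entirely outside the RPC path:

            // Hypothetical sketch of a plugin that never calls out during an RPC.
            import java.util.Map;
            import java.util.concurrent.ConcurrentHashMap;
            import java.util.concurrent.Executors;
            import java.util.concurrent.ScheduledExecutorService;
            import java.util.concurrent.TimeUnit;
            import org.apache.hadoop.fs.permission.FsPermission;

            public class CachedAuthzInfo {
              // path -> permission data, replaced wholesale by the refresher.
              private volatile Map<String, FsPermission> authzByPath =
                  new ConcurrentHashMap<>();
              private final ScheduledExecutorService refresher =
                  Executors.newSingleThreadScheduledExecutor();

              public void start(long refreshIntervalSec) {
                refresher.scheduleWithFixedDelay(this::refresh, 0,
                    refreshIntervalSec, TimeUnit.SECONDS);
              }

              // Called from the RPC path: a pure in-memory lookup, never blocks on I/O.
              public FsPermission lookup(String path) {
                return authzByPath.get(path);
              }

              private void refresh() {
                // Fetch the full authorization dataset from the external system
                // (HiveMetaStore/Sentry in the proposal) off the RPC path.
                Map<String, FsPermission> fresh = fetchFromExternalSystem();
                authzByPath = fresh; // atomic reference swap
              }

              private Map<String, FsPermission> fetchFromExternalSystem() {
                throw new UnsupportedOperationException("illustrative only");
              }
            }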

          Charles Lamb added a comment -

          Isn't the authz check this Jira proposes analogous to what the NN does today when it resolves user->groups mappings for authorization (LDAPGroupsMapping), which (I believe) is done in the fs RPC path?

          Alejandro Abdelnur added a comment -

          Charles Lamb, that seems correct. We make sure this is not an issue under normal circumstances by implementing caching. The same would hold for any plugin implementation meant for production usage.

          Daryn Sharp added a comment -

          Arg, yesterday's jira issues apparently caused my comment to be lost.

          The group mapping authz is a bit different. It's not context sensitive, as in a user uniformly belongs to groups across the whole namesystem. Path-based context sensitivity is adding hidden magic to a filesystem. How will the special magic be represented to the user confused by why the perms/ACLs aren't being honored? How will permission APIs and FsShell interact with the magic?

          Instead of trying to hack special behavior for a specific use case into the NN, how about leveraging what's there? A cleaner way may be for a custom group mapping to fabricate groups something like "hive:table" or "hive:table:column". No code changes in the NN. Everything is contained in the custom groups mapping.

          I still think leveraging ACLs is the best way to go...
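          For reference, the fabricated-groups idea could sit on Hadoop's existing GroupMappingServiceProvider extension point; a sketch (the Hive grant lookup is a hypothetical placeholder):

            // Sketch of the fabricated-groups idea on Hadoop's real
            // GroupMappingServiceProvider extension point; the grant lookup
            // source is hypothetical.
            import java.io.IOException;
            import java.util.ArrayList;
            import java.util.List;
            import org.apache.hadoop.security.GroupMappingServiceProvider;

            public class HiveEntityGroupsMapping implements GroupMappingServiceProvider {
              @Override
              public List<String> getGroups(String user) throws IOException {
                List<String> groups = new ArrayList<>();
                // Normal OS/LDAP groups would be resolved here as well...
                // ...plus synthetic groups derived from Hive grants, e.g. "hive:table":
                for (String table : tablesGrantedTo(user)) {
                  groups.add("hive:" + table);
                }
                return groups;
              }

              @Override
              public void cacheGroupsRefresh() throws IOException {
                // no-op in this sketch
              }

              @Override
              public void cacheGroupsAdd(List<String> groups) throws IOException {
                // no-op in this sketch
              }

              private List<String> tablesGrantedTo(String user) {
                throw new UnsupportedOperationException("illustrative only");
              }
            }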

          Alejandro Abdelnur added a comment -

          Daryn Sharp,

          The group mapping authz is a bit different. It's not context sensitive, as in a user uniformly belongs to groups across the whole namesystem.

          Mmmhh, I'd argue that it is context sensitive: 'user' context, just a different context.

          Path-based context sensitivity is adding hidden magic to a filesystem. How will the special magic be represented to the user confused by why the perms/ACLs aren't being honored?

          The authorization enforcement semantics do not change at all. The plugin cannot change the permission check logic.

          The plugin is responsible for providing user/group/permissions/ACLs information to the NN, which enforces the permissions consistently regardless of the plugin in use.

          How will permission APIs and FsShell interact with the magic?

          They work as usual. Check the attached patch; the current HDFS user/group/permission/ACLs handling is done by a plugin implementation.

          That said, a plugin implementation may decide to disable changes of user/group/permissions/ACLs. This can be done either silently or by failing.

          Instead of trying to hack special behavior for a specific use case into the NN, how about leveraging what's there?

          The proposal doc describes in detail 3 different usecases: HiveMetaStore tables, Hbase tables, Solr search collections.

          A cleaner way may be for a custom group mapping to fabricate groups something like "hive:table" or "hive:table:column". No code changes in the NN. Everything is contained in the custom groups mapping.

          This does not solve the problem. When adding a directory as a HiveMetaStore table partition, unless you set those special groups explicitly, they would not be on the files being added to the table.

          It requires client-side group manipulation, and this is what breaks things.

          I still think leveraging ACLs is the best way to go...

          Actually, we are. In the case of HiveMetaStore, the plugin would expose GRANT permissions as ACLs.

          Daryn, I'm happy to jump on the phone if you want to have a synchronous discussion.

          Daryn Sharp added a comment -

          Haha, synchronous discussion - that made my day. Yes, I'll contact you offline.

          Selvamohan Neethiraj added a comment -

          It just makes sense to add an external authorization interface to the NN and also provide a default implementation that would validate access based on the current NN access control implementation.
          This will provide flexibility for external modules to provide authorization based on ABAC or RBAC for HDFS resources.

          However, the current proposal seems to expose some of the INode-related structure in the API call.

          So, I would like to propose an alternate method in the external HDFS authorizer with the following method signature:

          public void checkPermission(
              String requestedPath, FsAction requestedAction,
              String pathToTest, FsAction actionToTest,
              boolean isFinalPathOnRecursiveCheck,
              FsPermission permOnThePath, List<AclEntry> aclsOnThePath,
              String ownerUserName, String owningGroupName,
              boolean isDirectory) throws AccessControlException;

          Where
          requestedPath - The path the user is trying to access (e.g: /apps/data/finance/sample.txt )
          requestedAction - The action to be performed by user (e.g: READ)
          pathToTest - The path for which the access is being validated by recursive check (e.g: /apps)
          actionToTest - The action to be checked on “pathToTest” (e.g: READ_EXECUTE)
          isFinalPathOnRecursiveCheck - If this is the final path requested by the end-user (true when this is the final check by recursive checker)

          permOnThePath - RWX permission for the pathToTest
          aclsOnThePath - ACL available in the pathToTest
          ownerUserName - owner/username of the pathToTest
          owningGroupName - owning group of the pathToTest
          isDirectory - true if the pathToTest is a directory

          This would allow the native HDFS authorizer implementation to have all the information needed to make decisions based on the parameters, and also gives an external authorizer the ability to make decisions based on other attributes.
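          To make the parameters concrete, a hypothetical recursive checker resolving READ on /apps/data/finance/sample.txt would call the method once per path component, roughly:

            // Hypothetical invocation of the signature above; the perm/acl/owner
            // values are placeholders the recursive checker would extract per inode.
            String requested = "/apps/data/finance/sample.txt";

            // Intermediate ancestor check (not the final path):
            authorizer.checkPermission(requested, FsAction.READ,
                "/apps", FsAction.READ_EXECUTE, false /* isFinalPathOnRecursiveCheck */,
                permOnApps, aclsOnApps, "hdfs", "hadoop", true /* isDirectory */);

            // Final component check:
            authorizer.checkPermission(requested, FsAction.READ,
                requested, FsAction.READ, true /* isFinalPathOnRecursiveCheck */,
                permOnFile, aclsOnFile, "hive", "hive", false /* isDirectory */);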

          Alejandro Abdelnur added a comment -

          Selvamohan Neethiraj,

          The current proposal (and POC code) only externalizes the source of truth for authorization information (user/group/permission/ACLs); it does not allow changing the behavior of checking permissions. IMO, doing this is safer than allowing a plugin to externalize the authorization assertion logic (which is not simple) and being exposed to unexpected behavior. In other words, with the current approach, the plugin only allows changing the data used to assert authorization, not how the authorization is asserted.

          Regarding exposing the INode: good point, we should create an interface with the methods the plugin should see (INode would implement this new interface). Something like:

          
          

          public interface INodeAuthorizationInfo {
            public String getFullPath();
            public void setUser(String user);
            public String getUser(int snapshot);
            public void setGroup(String group);
            public String getGroup(int snapshot);
            public void setPermission(long permission);
            public long getPermission(int snapshot);
            public void setAcls(List<AclEntry> acls);
            public List<AclEntry> getAcls(int snapshot);
          }

          Also, keep in mind that the plugin is not only used for authorization assertions; it also has to be used for producing the right authorization/ownership info back to the user via methods like getFileStatus() and getAcls(). Finally, a plugin could choose to implement changing authz info (user/group/permissions/acls) via the HDFS FS API (something that is possible with the attached POC).

          Jitendra Nath Pandey added a comment -

          I agree with the idea of plugin interface for authorization because many users want to customize the permission models to match their security requirements.
          We need to provide the ability to plug in authorization info as well as permission-checking semantics. I would suggest we consider a union of the two proposals, which allows set/get of auth info and also permission checks. If possible, the implementations should have the ability to fall back to the default behavior if they don't want to override a particular API.

          Alejandro Abdelnur added a comment -

          Maybe a way of doing that would be to make an interface for the PermissionChecker methods and have the plugin return an impl of it.

          I'll update the patch with those changes to see what people think.

          Alejandro Abdelnur added a comment - edited

          Attached is a new POC where FsPermissionChecker has been made an interface, the original one renamed to DefaultFsPermissionChecker, and the plugin creates a permission checker instance. (The patch is bigger because of the rename; when committing, an svn move would be done to preserve the history of the permission checker.)

          Then the plugin can do both: replace the authz info and replace the permission check logic.

          I've also removed the refresh() call from the plugin. This means the plugin does not provide a means to make external calls during a filesystem operation. A proper plugin impl should fetch all authz info asynchronously from fs operations.

          Alejandro Abdelnur added a comment -

          Updated the proposal, removing the refresh() method and adding the createPermissionChecker() method to the plugin interface.

          Jitendra Nath Pandey added a comment -

          In the FsPermissionChecker interface, I don't think we should expose FSDirectory. We can either attempt to remove FSDirectory from FsPermissionChecker, or keep the FsPermissionChecker class and let it internally use the plugin implementation for permission checks.

          Alejandro Abdelnur added a comment -

          Jitendra, attaching a v3 patch. In this patch I've moved the checkPermission logic to the plugin, and FsPermissionChecker delegates to the plugin. The FSDirectory is still exposed in the API.

          Between v2 and v3 I prefer v2.

          Still, I would argue we shouldn't allow replacing the permission checker logic, to ensure consistent check behavior. I don't have a use case for having different permission check logic, do you? If nothing is in sight at the moment, then we can table that until it's needed.

          Thoughts?

          Selvamohan Neethiraj added a comment -

          Alejandro,

          The use case for externalizing the authorization: if an enterprise keeps metadata details, such as what is "confidential", in a separate system and provides access control based on that metadata, it is important to have a pluggable authorization module that can use the metadata from the external system and authorize users/groups based on its own logic. I do not expect every organization to have a custom/pluggable authorization. But this would allow security vendors and system integrators to expand the security scope for HDFS.

          Alejandro Abdelnur added a comment -

          Selvamohan Neethiraj, got it, it makes sense.

          Next week I will start working on the v2 patch to get it in proper shape. As a first cut I would prefer to make things pluggable without altering APIs. We can work on refining the APIs once we have the desired functionality. Also, something to keep in mind: this plugin API is meant to be used by somebody with a very good understanding of the NameNode guts and expected behavior.

          Jitendra Nath Pandey added a comment -

          Alejandro Abdelnur
          Thanks for quickly prototyping v2 and v3, that helps a lot to compare the two approaches.

          It seems we can do v3 easily without even exposing FSDirectory. Consider this API:

          public interface INodeAuthorizationInfoProvider {
          ....
             static class InodePermissionInfo {  
                  String path;
                  String owner;
                  String group;
                  FsPermission perm;  
                  boolean isDirectory;
                  List<AclEntry> acls;
              }
             
               // List<InodePermissionInfo> contains info about all the inodes in the path
               void checkPermission(List<InodePermissionInfo> inodePermInfos, FsAction requestedAccess) throws AccessControlException;
          ......
          }
          

          I vote for v3 because in v3 the use of the plugin for checks will be confined to FsPermissionChecker, which can centrally extract all the information needed for InodePermissionInfo from FSDirectory and pass it to the plugin. Also, v3 exposes a single interface to implement, which seems simpler and more coherent.

          Following snippet of code could suffice in FsPermissionChecker#checkPermission to delegate the permission check to the plugin. Note that we already have code to get all inodes for a path in an array from FSDirectory.

              List<InodePermissionInfo> inodePermInfos = new ArrayList<InodePermissionInfo>();
          
              INode [] inodeArray = .... // obtain from FSDirectory
              long snapshotId =  ... // obtain from FSDirectory 
             
              for (INode i : inodeArray) {
                inodePermInfos.add(new InodePermissionInfo(i.getFullPathName(), 
                    i.getUserName(snapshotId),
                    i.getGroupName(snapshotId), i.getFsPermission(snapshotId),
                     i.isDirectory(), i.getAclFeature().getEntries()));
              }
          
              plugin.checkPermission(inodePermInfos, access);
          

          If the above makes sense, I can help to provide a default implementation for the checkPermission API.
          I will try to prototype and run a few tests.

          Andrew Purtell added a comment -

          Looking at version 2 of the proposal document, one of the motivating use cases is:

          An organization has a multi-tenant HBase cluster. Users would like to use HBase's MapReduce-over-snapshot feature in order to improve their analytic performance. In order to do this today, such jobs have to run as the 'hbase' user, giving them blanket access to all HBase data. HDFS ACLs partially solve this but still result in complex management.

          Current versions of HBase can enforce ACLs at the namespace, table, column, and cell levels. Only the first three levels in that hierarchy correspond to whole files. HBase can also do filtering on a per-cell basis given data sensitivity labeling, in a manner similar in many respects to Accumulo. Agreed that where security policy is not expressed and enforced at the cell level, this can reduce overall management complexity, but there's not a solution here for integrating with HBase or Accumulo cell-level features. Is that simply out of scope? Or potential future work? (Strawman: perhaps delegation of authorization in HDFS can be done on a per-block basis, and those applications can sort cells into HDFS blocks such that authorization policy is uniform over all data within the HDFS block, but that implies unlikely and complex changes on multiple levels.)

          Alejandro Abdelnur added a comment -

          Andrew Purtell, cell level is out of scope for this proposal. This proposal focuses on providing 'synchronized' authorization between data entities and the associated files for the use cases where the files fully belong to a single data entity. If a file contains data for multiple data entities (an HBase cell, or columns of a CSV file mapped to a HiveMetaStore table), it is not possible to map authorization to a file in a secure way (enforced by HDFS; you could enforce it at the client lib level, but a modified client lib would give you access to the whole file).

          My take is that, in the case of authorization at cell level, this will always remain in HBase. Otherwise, we would require an authorization source with the scalability of HBase and with more performance than HBase.

          Alejandro Abdelnur added a comment -

          New patch (v4) based on Selvamohan's & Jitendra's feedback:

          • created an interface to only expose getters for authz info from INode
          • retrofitted INode classes to this interface
          • created a check in the authz interface that does not take an FSDirectory
          • did some renaming on proposed new classes/interfaces
          Alejandro Abdelnur added a comment -

          Forgot to mention, the AuthorizationProvider.INodeAuthorizationInfo.getPermissionLong() method does not take a snapshotId because it seems there is special handling for permissions and the snapshot-ed INode is queried. I'll take a look to see if it makes sense to tweak this to pass a snapshotId.

          Andrew Purtell added a comment -

          Thanks Alejandro Abdelnur, I agree with your points. Although delegated access to units of data smaller than a file is out of scope now, it's interesting and not hopeless as possible future work.

          Interesting because translation of fine grained authorization decisions from one layer of the stack, esp. between HDFS and things that sit on top, could be the cornerstone of unified authz in the stack.

          Not hopeless because research efforts like Horus illustrate how fine grained sub-file access for things like secure MapReduce over HBase HFiles can be built on a highly scalable KMS: https://www.usenix.org/conference/fast13/technical-sessions/presentation/li_yan .

          Jitendra Nath Pandey added a comment -

          Alejandro Abdelnur,
          I think we don't need to expose snapshotId. We can assume that this plugin will be invoked after the path has been resolved and INodeAuthorizationInfo is populated accordingly. The external implementations may not be able to track snapshotId; they should just use the full path.
          For example, in FsPermissionChecker, all we need from snapshotId is to find the right owner, group and permissions. If we pass the full path, along with INodeAuthorizationInfo populated with the owner, group and permissions for each intermediate inode, snapshotId is not needed.
          One possible way to implement this: Instead of having INode implement the INodeAuthorizationInfo interface, we could implement INodeAuthorizationInfo to encapsulate the snapshotId and the INode, and the method implementations would be just delegation to INode methods with snapshotId passed.

          A few comments on the checkPermission API:

          • snapshotId is not needed in the params as argued above.
          • boolean resolveLink should not be needed, because all resolution should be already done before this interface is invoked, also the plugins cannot be expected to resolve the links.
          Jing Zhao added a comment -

          I think we don't need to expose snapshotId. We can assume that this plugin will be invoked after the path has been resolved and INodeAuthorizationInfo is populated accordingly. The external implementations may not be able to track snapshotId, they should just use the full path.

          Yes, I agree with Jitendra Nath Pandey here. INodeAuthorizationInfo etc. only needs to handle information after path resolution; thus the snapshot ID should not be included in the API.

          we could implement INodeAuthorizationInfo to encapsulate the snapshotId and the INode

          Note that the class SnapshotCopy is a subclass of INodeAttributes. Thus I guess a simpler way here could be to let INodeAttributes implement INodeAuthorizationInfo. But one issue here is how to support getFullPath for a snapshot inode, since it can be very expensive to get the full path for an inode that only exists in snapshots. It would make life easier if we could remove it from INodeAuthorizationInfo.

          Alejandro Abdelnur added a comment -

          Jitendra Nath Pandey,

          Uploading a v5 patch. I've removed the resolveLink. I've added two methods to the INodeAuthorizationInfo interface: getLocalName() and getParent().

          IMO, it is up to a plugin to support snapshots or not. The default impl (native HDFS) does, in the current patch. An impl that does not care can return the current authz info for the node regardless of the provided snapshotId. Using the full path does not necessarily work when supporting snapshots, because of rename ops.

          Regarding wrapping INode with an INodeAuthorizationInfo impl before passing it to the plugin: that would require creating a bunch of new objects (the wrappers). When crafting the API I took special care not to create additional objects in the authz methods. Thus, I would like to keep it like that.

          Jitendra Nath Pandey added a comment -

          IMO, it is up to a plugin to support snapshots or not.

          I am not sure how a plugin will support snapshots. All it can do is pass the snapshotId to the INodeAuthorizationInfo methods as an opaque number which it cannot interpret. Is there a use case where a plugin will deliberately want to ignore snapshotId or interpret it in some way? I am concerned that we are exposing an internal construct in a public interface which a plugin cannot meaningfully understand. Later it will be difficult to change the interface because it is a public API.

          Alejandro Abdelnur added a comment -

          I've poked Andrew Wang earlier today to get some info on snapshots.

          My current patch intercepts getPermissionLong(); it should intercept getFsPermission(), so it is consistent with all the other getter methods that take a snapshotId.

          The plugin should have two additional methods, createSnapshot() and deleteSnapshot(); this would make the plugin participate in the snapshot lifecycle and allow the plugin to implement snapshot logic if desired (it doesn't have to).

          I'll update the patch to reflect these changes.

          Jing Zhao added a comment -

          The plugin should have 2 additional methods createSnapshot() and deleteSnapshot(), this would make the plugin participate in the snapshot lifecycle and allow the plugin to implement snapshot logic if desired (it doesn't have to do it).

          Can you provide more details about this? Currently in HDFS snapshots can only be created/deleted on snapshottable directories. I'm not sure how to let the authorization plugin participate in the snapshot lifecycle.

          Alejandro Abdelnur added a comment -

Jing Zhao,

          Wiring the authz plugin to the snapshot lifecycle requires the following methods:

• setSnapshotDirs(Map<INode, snapshotID>) // called at startup time, allowing the plugin to know existing snapshots
          • addSnapshotDir(INode)
          • removeSnapshotDir(INode)
          • createSnapshot(INode, snapshotID)
          • deleteSnapshot(INode, snapshotID)

          The authz plugin is given to the SnapshotManager at startup, and the SnapshotManager would call these methods in the corresponding snapshot operations.

          This will allow the authz plugin (if desired) to keep track of snapshots and to perform its own snapshot management for authz information.
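For concreteness, a minimal Java sketch of what such a lifecycle hook interface might look like (the method names follow the list above; the exact shape is an assumption, not the committed API):

  public interface SnapshotLifecycleAware {
    // called once at startup with the existing snapshottable dirs and their
    // latest snapshot IDs, so the plugin can seed or resync its own state
    void setSnapshotDirs(Map<INode, Integer> snapshottableDirs);
    // a directory became snapshottable
    void addSnapshotDir(INode dir);
    // a directory ceased to be snapshottable
    void removeSnapshotDir(INode dir);
    // a snapshot was created/deleted under a snapshottable directory
    void createSnapshot(INode dir, int snapshotId);
    void deleteSnapshot(INode dir, int snapshotId);
  }

The SnapshotManager would hold a reference to an implementation of this and invoke the matching method from each snapshot operation.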

          Andrew Wang added a comment -

          I don't think we need the addSnapshotDir and removeSnapshotDir hooks. Snapshots can't be created unless the dir is snapshottable, and you can't unmark a dir as snapshottable unless all the snapshots are deleted.

One issue though: I'm not sure how the plugin could have useful permissions for an existing snapshot, i.e. a snapshot it learns about from the setSnapshotDirs() API.

          Alejandro Abdelnur added a comment -

          Andrew Wang,

By add/remove I meant a directory becoming, or ceasing to be, snapshottable.

The setSnapshotDirs() method would be called by the SnapshotManager on startup, with values obtained from getSnapshottableDirs(), to seed the authz plugin with the current snapshottable dirs and their current snapshot info. This would allow the plugin to initialize its snapshot logic on the very first startup, and to verify/resync itself on subsequent restarts.

          Alejandro Abdelnur added a comment -

          uploading v6 patch:

• adds snapshot lifecycle methods and wires them to the SnapshotManager
• fixes how permissions are handled, to be consistent with the other methods (receiving snapshotId on get)
• fixes setters of user/group which were not properly wired
• renames some classes/params/vars for consistency

Trying to answer how an external plugin would handle snapshot info:

A plugin willing to handle snapshot information would need a store with versioned authorization information.

This leverages the globally monotonic nature of snapshot IDs: if the provided snapshotId is CURRENT_STATE_ID, the latest version of the authorization information must be returned; if the provided snapshotId is not CURRENT_STATE_ID, the latest version whose ID is no greater than snapshotId must be returned.
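As a minimal sketch (not part of the patch), a plugin could keep a per-inode TreeMap keyed by snapshot ID, assuming a hypothetical AuthzInfo value class:

  // hypothetical per-inode versioned store; AuthzInfo is an assumed value class
  private final TreeMap<Integer, AuthzInfo> versions = new TreeMap<Integer, AuthzInfo>();

  AuthzInfo lookup(int snapshotId) {
    if (snapshotId == Snapshot.CURRENT_STATE_ID) {
      // latest version of the authorization information
      return versions.isEmpty() ? null : versions.lastEntry().getValue();
    }
    // latest version whose snapshot ID is no greater than the requested one
    Map.Entry<Integer, AuthzInfo> e = versions.floorEntry(snapshotId);
    return e == null ? null : e.getValue();
  }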

The setSnapshotableDirs method is called at NN startup to inform the plugin of the current snapshottable directories and their latest snapshot. This allows a plugin to initialize its storage accordingly (in case it was just being set up) or resync if necessary.

The create/remove snapshottableDir methods allow the plugin to set up and clean up any necessary information regarding a directory being snapshottable.

The create/remove snapshot methods allow the plugin to tag/version and clean up any necessary information regarding a snapshot being created or removed.

          Alejandro Abdelnur added a comment -

v7 patch, with javadocs and test cases.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12663254/HDFS-6826v7.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          -1 javadoc. The javadoc tool appears to have generated 1 warning messages.
          See https://builds.apache.org/job/PreCommit-HDFS-Build/7709//artifact/trunk/patchprocess/diffJavadocWarnings.txt for details.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          -1 release audit. The applied patch generated 3 release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.server.namenode.snapshot.TestCheckpointsWithSnapshots
          org.apache.hadoop.hdfs.server.namenode.TestStorageRestore
          org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotManager
          org.apache.hadoop.hdfs.server.namenode.TestNameNodeRecovery
          org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion
          org.apache.hadoop.hdfs.server.namenode.snapshot.TestAclWithSnapshot
          org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap
          org.apache.hadoop.hdfs.server.namenode.TestEditLogRace
          org.apache.hadoop.hdfs.server.namenode.snapshot.TestUpdatePipelineWithSnapshots
          org.apache.hadoop.hdfs.server.namenode.TestFSImageWithSnapshot
          org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshot
          org.apache.hadoop.hdfs.server.namenode.TestNamenodeRetryCache
          org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot
          org.apache.hadoop.hdfs.server.namenode.TestSaveNamespace
          org.apache.hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots
          org.apache.hadoop.hdfs.server.namenode.TestFsLimits
          org.apache.hadoop.hdfs.server.namenode.snapshot.TestXAttrWithSnapshot
          org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
          org.apache.hadoop.hdfs.server.namenode.TestValidateConfigurationSettings
          org.apache.hadoop.hdfs.server.namenode.TestSecurityTokenEditLog
          org.apache.hadoop.hdfs.server.namenode.TestStartup
          org.apache.hadoop.hdfs.server.namenode.TestFSPermissionChecker

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7709//testReport/
          Release audit warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/7709//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7709//console

          This message is automatically generated.

          Alejandro Abdelnur added a comment -

Uploaded a patch taking care of the test case failures (needed a bit of defensive coding, as test cases are not always initializing everything) and fixed the javadoc warning.

The audit warnings are unrelated (they come from the fs-encryption merge and are already being addressed).

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12663594/HDFS-6826v7.1.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          -1 release audit. The applied patch generated 3 release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.server.namenode.TestEditLogRace
          org.apache.hadoop.hdfs.server.namenode.TestValidateConfigurationSettings
          org.apache.hadoop.hdfs.server.namenode.TestStartup
          org.apache.hadoop.hdfs.server.namenode.TestNameNodeRecovery
          org.apache.hadoop.hdfs.TestHDFSServerPorts
          org.apache.hadoop.hdfs.server.namenode.TestSaveNamespace
          org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
          org.apache.hadoop.hdfs.server.namenode.TestFsLimits

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7716//testReport/
          Release audit warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/7716//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7716//console

          This message is automatically generated.

          Alejandro Abdelnur added a comment -

New patch, cleaning up after the test case.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12663679/HDFS-6826v7.2.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.server.namenode.TestEditLogRace
          org.apache.hadoop.hdfs.server.namenode.TestStartup
          org.apache.hadoop.hdfs.server.namenode.TestNameNodeRecovery
          org.apache.hadoop.hdfs.TestHDFSServerPorts
          org.apache.hadoop.security.TestRefreshUserMappings
          org.apache.hadoop.hdfs.server.namenode.TestSaveNamespace
          org.apache.hadoop.hdfs.server.namenode.TestFsLimits

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7719//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7719//console

          This message is automatically generated.

          Alejandro Abdelnur added a comment -

Canceling the patch; there is some special handling for edit logs that has to be done. Working on it.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12663679/HDFS-6826v7.2.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.TestHDFSServerPorts
          org.apache.hadoop.hdfs.server.namenode.TestStartup
          org.apache.hadoop.hdfs.server.namenode.TestFsLimits
          org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
          org.apache.hadoop.hdfs.server.namenode.TestValidateConfigurationSettings
          org.apache.hadoop.hdfs.server.namenode.TestEditLogRace
          org.apache.hadoop.hdfs.server.namenode.TestSaveNamespace
          org.apache.hadoop.hdfs.server.namenode.TestNameNodeRecovery
          org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7722//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7722//console

          This message is automatically generated.

          Daryn Sharp added a comment -

Sorry I disappeared; 2.x deployment issues are keeping me busy. The concept is fine, but I strongly object to the design.

Alejandro Abdelnur explained to me offline how this is intended to work with hive. Users spray partition files all over the place. These files represent a logical entity with its own authz scheme. A directory could easily control the permissions, but we're letting the tail wag the dog (a path-sensitive plugin) to accommodate disorganization.

The way I see it, the approach must maintain the semantics of a filesystem. Permissions are the authz for a filesystem. A plugin cannot impose magical/hidden authz on a file. Imagine the user/admin confusion that will arise. Instead we need to creatively leverage the existing semantics and allow additional authz restrictions that are still viewable. We have user, group, ACLs, and xattrs to work with.

          After internal discussion, the cleaner approach that maintains fs semantics is a plugin that may return additional fake & immutable (from the user's perspective) ACLs. Further, external path sensitivity is fragile. If the file or any parent directory is mutable, simply renaming the tree will effectively drop the faked authz. You should associate something with a path, like perhaps an immutable xattr that indicates the plugin should be queried for additional ACLs. You may even use the xattr value to indicate the file table for quick & easy lookups.
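A rough sketch of that marker idea (illustrative only; the xattr name, the findXAttr() helper, and the plugin call below are assumptions, not an existing API):

  // hypothetical marker xattr; its value could identify the backing entity (e.g. the table)
  private static final String AUTHZ_XATTR = "system.authz.provider";

  List<AclEntry> getExtraAcls(INode inode, int snapshotId) {
    // findXAttr() is a hypothetical helper that looks the marker xattr up on the inode
    XAttr marker = findXAttr(inode, snapshotId, AUTHZ_XATTR);
    if (marker == null) {
      return Collections.emptyList(); // plain file, no external authz
    }
    // ask the external system for additional, user-immutable ACL entries
    String entity = new String(marker.getValue(), StandardCharsets.UTF_8);
    return plugin.getAclEntries(entity);
  }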

          Alejandro Abdelnur added a comment -

          @daryn,

          Thanks for following up on this.

I really like your suggestion of using 'additional' immutable ACLs to expose external permissions. I've made a version of the patch to see how it would work (see attachment HDFS-6826v8.patch).

But I still have a couple of issues I'm not sure how to address with your proposed approach:

Permissions remain settable in 2 places for 'table files': HDFS and the HiveMetaStore authorization source (i.e. Sentry). One of the motivations was to have a single source of permissions for 'table files'. I can constrain this to regular permissions (by having the plugin fully redact the ACLs), but I still have 2 sources of authorization. Maybe by being able to redact the regular permissions as well? (though you are advocating against that).

It is not possible to do a hand-over: with the previous proposals, when a file was added to a table, the owner of the file would become hive:hive. If the original owner did not have GRANTs to the file, then s/he would not have access anymore. With your proposed approach this is not possible, as the original owner remains.

We also lose the capability of changing the 'permission check' logic that Jitendra and Selvamohan wanted.

          Thoughts?

          Daryn Sharp added a comment -

My position is hdfs authz (user/group/ACLs) should be authoritative in the namespace. My suggestion of the ability to fake up an ACL was a minimally invasive compromise, which I'm not fond of either. I'm not aware of another filesystem that allows this sort of behavior. Which brings up another point: how will these hbase/hive features work with s3 or any other filesystem?

          Stepping back, let's look at the hive case. Basically it sounds like hive needs to take ownership of files and allow efficient grant updates. Let's say the NN allowed limited chown capabilities. When a partition is added to a hive table, the hive server moves the file into a directory representing the table and chowns the file to itself. Now ACLs on the table directory are used to control access to the files. Changing a grant involves updating the table dir's ACLs, not updating the n-many partitions representing the table. Would this work?

          Alejandro Abdelnur added a comment -

          @daryn,

          When a partition is added to a hive table, the hive server moves the file into a directory representing the table and chowns the file to itself.

          To do the chown it would require the HiveMetaStore to be a hdfs superuser. AFAIK, ACLs are not inherited at permission check time. They are inherited at creation time if the parent has DEFAULT ACLs. So, the chown would have to be done recursively. And, if somebody adds a file to a partition dir later on, it would not have the right settings.

And this still does not solve my 'single source of authz info' concern, as I could still set things in HDFS or in Sentry.

          I'm trying to come up with a model that enables authz to be delegated if the file/dir belongs to a table.

          Please let me know if you want to do another synchronous discussion round.

          Daryn Sharp added a comment -

          To do the chown it would require the HiveMetaStore to be a hdfs superuser

          Sorry I wasn't clear but that wasn't my intent. By "limited chown capabilities" I meant modifying chown to be usable by a non-superuser with appropriate restrictions. Think of it as a sudoers thing.

Basically, /hive/table1 represents the table and is owned by hive:somegroup with rwxr-x---. The group and/or ACLs embody the table grants. Changing grants only requires modifying the table1 directory itself, not its contents.

/hive/table1/part1..part20 are owned by hive:hive with perms r--r--r--. The user/group/ACLs don't really matter because the directory is controlling access to the table's partitions. Now when a user wants to add a partition, it's moved to /hive/table1 and chown'ed to owner hive with r--r--r--.
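Spelled out as a listing (a reconstruction, since the permission strings render poorly in JIRA):

  /hive/table1          hive:somegroup  rwxr-x---   (grants live on the dir: group/ACLs)
  /hive/table1/part1    hive:hive       r--r--r--
  ...
  /hive/table1/part20   hive:hive       r--r--r--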

          Alejandro Abdelnur added a comment -

Updated version of the v7 patch (v7.3). The problem was that FSNamesystem#stopCommonServices() is sometimes called without a previous call to startCommonServices(), so an NPE was happening in the non-initialized authz provider. I've guarded the provider stop() with a null check, and all the tests test-patch flagged as failing now pass.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12664319/HDFS-6826v7.3.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7764//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7764//console

          This message is automatically generated.

          Alejandro Abdelnur added a comment -

The failing test passes locally; scanning the test output does not show anything related to this patch.

Uploading a new v7 patch with some refactoring, making the authz provider an abstract class with singleton-pattern access.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12664424/HDFS-6826v7.4.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7770//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7770//console

          This message is automatically generated.

          Alejandro Abdelnur added a comment -

v7.5 version: as per Daryn's (offline) feedback, undoing all FsPermissionChecker changes and adding a call to the plugin during the permission check.

          I have 2 additional action items:

          1. see if special handling is required during edit log replay with v7.
          2. toy with a v9 where interception is done at permission checker and filestatus creation level.

          Jitendra Nath Pandey added a comment -

In v7.5, in FSPermissionChecker#checkPermission, if the plugin is configured, checkTraverse and the rest of the existing code would still be invoked. A plugin should have the ability to exclusively perform the permission checks.

          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12664778/HDFS-6826v7.5.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7797//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7797//console

          This message is automatically generated.

          Alejandro Abdelnur added a comment -

Jitendra Nath Pandey, then v7.4 is what you would prefer? From yesterday's call, I think Daryn and I misunderstood: we thought the requirement was to be able to add an additional assertion in the authorization path, not to fully replace it.

          Jitendra Nath Pandey added a comment -

I do prefer the v7.4 approach. I believe plugins should have the flexibility to exclusively control the permission check.

          Alejandro Abdelnur added a comment -

Uploading v7.6, which is based on v7.4 (per Jitendra's preference).

This patch adds the isClientOp() method to the provider, which indicates whether the current invocation is part of a client FS operation or part of a NameNode operation (fs image save, fs edit log, etc). I've had to create an interceptor for the ClientProtocol.

          The testcases do positive and negative testing for this.

          This addresses my action item #1 from my discussion with @daryn.
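For reference, a minimal sketch of how such an interceptor could track the client-op context with a thread-local flag, assuming a java.lang.reflect dynamic proxy over ClientProtocol (the patch's actual mechanics may differ):

  public class ClientOpInvocationHandler implements InvocationHandler {
    private static final ThreadLocal<Boolean> CLIENT_OP = new ThreadLocal<Boolean>() {
      @Override
      protected Boolean initialValue() {
        return Boolean.FALSE;
      }
    };

    private final ClientProtocol delegate;

    public ClientOpInvocationHandler(ClientProtocol delegate) {
      this.delegate = delegate;
    }

    // what the provider's isClientOp() could consult
    public static boolean isClientOp() {
      return CLIENT_OP.get();
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
      CLIENT_OP.set(Boolean.TRUE); // entering a client FS operation
      try {
        return method.invoke(delegate, args);
      } finally {
        CLIENT_OP.set(Boolean.FALSE); // back to NameNode-internal context
      }
    }
  }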

          Alejandro Abdelnur added a comment -

Regarding my action item #2, based on @daryn’s suggestion, I’ve moved all data authz interception to the FsPermissionChecker, something like:

          Adding the following methods:

            private String getUserName(INode[] nodes) {
              return AuthorizationProvider.get().getUser(nodes);
            }
          
            private String getGroupName(INode[] nodes) {
    // we need to recreate INode[] using the snapshot version of the nodes, here or before calling
              return AuthorizationProvider.get().getGroup(nodes);
            }
          
            private FsPermission getFsPermission(INode[] nodes) {
              return AuthorizationProvider.get().getFsPermission(nodes);
            }
            
            private AclFeature getAclFeature(INode[] nodes) {
              return AuthorizationProvider.get().getAclFeature(nodes);
            }
          

And then replacing all calls to the user/group/permission/ACL getters of INode within FsPermissionChecker with the above methods, e.g. (see the sketch below):
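A hypothetical before/after of one such replacement (illustrative only; checkAccess() and its signature are assumptions, not code from the patch):

  // hypothetical checker fragment: the permission is no longer read off the
  // INode directly, but resolved through the provider with the INode[] chain
  private void checkAccess(INode[] inodes, INode last, int snapshotId, FsAction access)
      throws AccessControlException {
    FsPermission mode = getFsPermission(inodes); // was: last.getFsPermission(snapshotId)
    if (!mode.getUserAction().implies(access)) {
      throw new AccessControlException("Permission denied: " + access);
    }
  }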

          The intention was to be able to reuse the already calculated INode[] chain.

The issues I’m running into with this are:

• The INode[] chain calculated at the entry point of checkPermission does not take snapshots into account, thus we will need to recalculate the INode[] to use the right snapshot for the full chain.
• All the logic in FSPermissionChecker has to be redone to pass the INode[] chain around. This will get tricky, as in many places direct array accesses like 'checkStickyBit(inodes[inodes.length - 2], last, snapshotId);' are being done. And these are not always done on the snapshot version of the INode.
• The INode[] may have null elements, complicating things on the plugin side.
• We’ll have to do the same in the FSDirectory to create file statuses.

Another issue is that, doing this, the plugin only intercepts getter calls, not setter calls.

Overall, it seems the plugin will have to be more complex than with the v7 approach: it will have less functionality (no writes, no snapshots), and it will require some serious rewriting of the FsPermissionChecker.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12665073/HDFS-6826v7.6.patch
          against trunk revision c4c9a78.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          -1 javadoc. The javadoc tool appears to have generated 2 warning messages.
          See https://builds.apache.org/job/PreCommit-HDFS-Build/7837//artifact/trunk/patchprocess/diffJavadocWarnings.txt for details.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.security.TestRefreshUserMappings

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7837//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7837//console

          This message is automatically generated.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12665073/HDFS-6826v7.6.patch
          against trunk revision d9a7404.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          -1 javadoc. The javadoc tool appears to have generated 2 warning messages.
          See https://builds.apache.org/job/PreCommit-HDFS-Build/7841//artifact/trunk/patchprocess/diffJavadocWarnings.txt for details.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.server.namenode.TestParallelImageWrite
          org.apache.hadoop.hdfs.TestAppendDifferentChecksum
          org.apache.hadoop.hdfs.server.namenode.TestHDFSConcat
          org.apache.hadoop.hdfs.server.datanode.TestReadOnlySharedStorage
          org.apache.hadoop.fs.TestSymlinkHdfsFileContext
          org.apache.hadoop.hdfs.server.datanode.TestDataNodeMetrics
          org.apache.hadoop.hdfs.TestDFSMkdirs
          org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics
          org.apache.hadoop.fs.TestGlobPaths
          org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
          org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
          org.apache.hadoop.fs.TestHDFSFileContextMainOperations
          org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus
          org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery
          org.apache.hadoop.hdfs.TestDFSRename
          org.apache.hadoop.hdfs.server.namenode.ha.TestXAttrsWithHA
          org.apache.hadoop.hdfs.web.TestWebHDFS
          org.apache.hadoop.hdfs.server.namenode.TestAddBlock
          org.apache.hadoop.hdfs.server.namenode.ha.TestHAMetrics
          org.apache.hadoop.hdfs.server.datanode.TestBlockHasMultipleReplicasOnSameDN
          org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
          org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
          org.apache.hadoop.hdfs.server.namenode.TestBackupNode
          org.apache.hadoop.hdfs.TestDFSUpgrade
          org.apache.hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage
          org.apache.hadoop.hdfs.server.namenode.TestHostsFiles
          org.apache.hadoop.hdfs.TestParallelUnixDomainRead
          org.apache.hadoop.fs.viewfs.TestViewFsHdfs
          org.apache.hadoop.hdfs.web.TestHttpsFileSystem
          org.apache.hadoop.fs.TestResolveHdfsSymlink
          org.apache.hadoop.hdfs.TestFileAppend3
          org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
          org.apache.hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate
          org.apache.hadoop.hdfs.TestParallelShortCircuitLegacyRead
          org.apache.hadoop.cli.TestAclCLI
          org.apache.hadoop.hdfs.server.namenode.TestStartup
          org.apache.hadoop.fs.shell.TestHdfsTextCommand
          org.apache.hadoop.hdfs.server.namenode.TestNameNodeAcl
          org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
          org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
          org.apache.hadoop.hdfs.server.datanode.TestDiskError
          org.apache.hadoop.hdfs.TestBlockReaderLocal
          org.apache.hadoop.hdfs.TestMultiThreadedHflush
          org.apache.hadoop.fs.TestFcHdfsPermission
          org.apache.hadoop.hdfs.web.TestWebHdfsTokens
          org.apache.hadoop.hdfs.TestConnCache
          org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
          org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
          org.apache.hadoop.hdfs.server.datanode.TestCachingStrategy
          org.apache.hadoop.hdfs.server.namenode.TestFSImageWithSnapshot
          org.apache.hadoop.hdfs.crypto.TestHdfsCryptoStreams
          org.apache.hadoop.hdfs.server.namenode.TestNameNodeXAttr
          org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
          org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
          org.apache.hadoop.hdfs.server.namenode.ha.TestHarFileSystemWithHA
          org.apache.hadoop.hdfs.server.datanode.TestIncrementalBrVariations
          org.apache.hadoop.fs.TestFcHdfsSetUMask
          org.apache.hadoop.hdfs.server.namenode.TestNameEditsConfigs
          org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
          org.apache.hadoop.hdfs.TestLargeBlock
          org.apache.hadoop.hdfs.TestRenameWhileOpen
          org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
          org.apache.hadoop.hdfs.server.datanode.TestDeleteBlockPool
          org.apache.hadoop.hdfs.TestModTime
          org.apache.hadoop.hdfs.server.namenode.TestFileContextAcl
          org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyIsHot
          org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestInterDatanodeProtocol
          org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
          org.apache.hadoop.hdfs.server.namenode.ha.TestHAAppend
          org.apache.hadoop.hdfs.TestListFilesInFileContext
          org.apache.hadoop.hdfs.TestDFSRemove
          org.apache.hadoop.hdfs.server.namenode.TestBlockUnderConstruction
          org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
          org.apache.hadoop.hdfs.server.datanode.TestDnRespectsBlockReportSplitThreshold
          org.apache.hadoop.hdfs.server.namenode.TestLargeDirectoryDelete
          org.apache.hadoop.fs.permission.TestStickyBit
          org.apache.hadoop.hdfs.TestFileAppend2
          org.apache.hadoop.hdfs.TestBlocksScheduledCounter
          org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner
          org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
          org.apache.hadoop.cli.TestHDFSCLI
          org.apache.hadoop.hdfs.server.namenode.TestSequentialBlockId
          org.apache.hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade
          org.apache.hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport
          org.apache.hadoop.hdfs.server.namenode.ha.TestHAStateTransitions
          org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration
          org.apache.hadoop.hdfs.TestDecommission
          org.apache.hadoop.hdfs.server.datanode.TestHSync
          org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
          org.apache.hadoop.hdfs.TestWriteRead
          org.apache.hadoop.hdfs.server.namenode.TestINodeFile
          org.apache.hadoop.hdfs.TestEncryptionZones
          org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing
          org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
          org.apache.hadoop.hdfs.TestAbandonBlock
          org.apache.hadoop.hdfs.server.datanode.TestMultipleNNDataBlockScanner
          org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
          org.apache.hadoop.hdfs.server.namenode.TestNamenodeRetryCache
          org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
          org.apache.hadoop.cli.TestCryptoAdminCLI
          org.apache.hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality
          org.apache.hadoop.hdfs.server.namenode.TestFSImageWithXAttr
          org.apache.hadoop.hdfs.server.datanode.TestBlockReplacement
          org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestDatanodeRestart
          org.apache.hadoop.fs.TestUrlStreamHandler
          org.apache.hadoop.hdfs.TestMissingBlocksAlert
          org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
          org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
          org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
          org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
          org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
          org.apache.hadoop.fs.TestEnhancedByteBufferAccess
          org.apache.hadoop.hdfs.web.TestWebHDFSForHA
          org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
          org.apache.hadoop.hdfs.server.namenode.TestMetaSave
          org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache
          org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
          org.apache.hadoop.hdfs.TestSmallBlock
          org.apache.hadoop.hdfs.TestReservedRawPaths
          org.apache.hadoop.hdfs.TestDFSRollback
          org.apache.hadoop.hdfs.server.namenode.TestCheckpoint
          org.apache.hadoop.cli.TestXAttrCLI
          org.apache.hadoop.hdfs.server.namenode.TestProcessCorruptBlocks
          org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForAcl
          org.apache.hadoop.hdfs.web.TestWebHDFSAcl
          org.apache.hadoop.hdfs.TestLocalDFS
          org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
          org.apache.hadoop.hdfs.server.namenode.TestFavoredNodesEndToEnd
          org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyBlockManagement
          org.apache.hadoop.hdfs.server.namenode.metrics.TestNNMetricFilesInGetListingOps
          org.apache.hadoop.hdfs.server.namenode.ha.TestFailoverWithBlockTokensEnabled
          org.apache.hadoop.hdfs.TestCrcCorruption
          org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
          org.apache.hadoop.hdfs.server.datanode.TestTransferRbw
          org.apache.hadoop.hdfs.TestDatanodeDeath
          org.apache.hadoop.hdfs.server.namenode.ha.TestHASafeMode
          org.apache.hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages
          org.apache.hadoop.hdfs.TestRestartDFS
          org.apache.hadoop.hdfs.TestFileAppend4
          org.apache.hadoop.hdfs.TestFileStatus
          org.apache.hadoop.hdfs.server.namenode.TestFileLimit
          org.apache.hadoop.fs.TestFcHdfsCreateMkdir
          org.apache.hadoop.hdfs.server.namenode.TestSnapshotPathINodes
          org.apache.hadoop.tools.TestJMXGet
          org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
          org.apache.hadoop.hdfs.TestDFSClientFailover
          org.apache.hadoop.hdfs.TestQuota
          org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
          org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
          org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
          org.apache.hadoop.hdfs.server.namenode.TestFSImage
          org.apache.hadoop.fs.TestSymlinkHdfsDisable
          org.apache.hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots
          org.apache.hadoop.hdfs.server.namenode.ha.TestQuotasWithHA
          org.apache.hadoop.hdfs.TestClientBlockVerification

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7841//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7841//console

          This message is automatically generated.

          Daryn Sharp added a comment -

          I'm taking a look now. It appears much more complicated than I'd expect.

          Daryn Sharp added a comment -

          I'm tinkering with a patch that should provide a simple API for the plugin. I'll post it today for your consideration.

          Jitendra Nath Pandey added a comment -

          A comment on the v7.6 patch:
          We should not use the AuthorizationProviderProxyClientProtocol for the default plugin.

          Apart from the above, I am +1 for the patch.

          Daryn Sharp, are you going to provide an alternate implementation soon? If not, would you be OK with Alejandro Abdelnur's latest patch?

          Daryn Sharp added a comment -

          Sorry, 2.5 issues diverted my attention. I should be able to refocus on this today.

          Alejandro Abdelnur added a comment -

          Jitendra Nath Pandey, the AuthorizationProviderProxyClientProtocol does no harm being there the whole time; I would prefer to keep it, to avoid having to propagate knowledge of which plugin is in use down to the ClientNamenodeProtocolServerSideTranslatorPB. What is your concern with having it always?

          Jitendra Nath Pandey added a comment -

          My concern is that we are adding another layer which will introduce unnecessary function calls for users who don't want a plugin. However, the cost should be small compared to the cost of an RPC, so I will accept your call on this.
          As stated previously, I am +1 on the latest patch.

          Daryn Sharp added a comment -

          Apologies for my delay, again. A while back, Alejandro and I spoke offline about how hooking in at the inode level and/or allowing a plugin to completely override the permission checking logic is detrimental. The latest patch doesn't directly splice into the inodes, but for reference the reason it's bad is because replaying edits, whether at startup or HA standby, should not ever go through the plugin.

          Allowing complete override of the permission checking is bad too. Plugins should not be in control of the traversal or iteration of inodes. The plugin should be just that - a hook for the core permission checking. Otherwise plugins will be fragile when changes are made to path resolution. And/or plugins will have to implement duplicated logic.

          With the backlogged feature work I'm doing on optional fine-grained locks, path resolution, locking, and permission checking will all be folded together. In order for locking to work, inodes will be "resolved as you go". About the only way to ensure compatibility is for the permission checker to have a hook to call out to the plugin. The plugin cannot override the core behavior.

          I'll attach an incomplete example patch showing how an external plugin can substitute different inode attrs during a resolution. It can be further optimized (I scaled it back) but it demonstrates the basic principle. It doesn't address the fact that Argus needs another hook to provide further permission checks, but it meets Alejandro's needs. An enhancement for Argus can be another jira.

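          To make the hook-based approach concrete, here is a minimal sketch of what such a callout might look like. Every name below (the interface, AuthzAttributes, getAttributes) is a hypothetical illustration, not the API of any attached patch:

          ```java
          // Hypothetical sketch: a hook the permission checker could call while
          // walking a resolved path, letting a plugin substitute inode attributes
          // without taking over traversal or the core checking logic.
          public interface AttributeSubstitutionHook {

            /**
             * Called once per inode as the checker walks the resolved path.
             * @param pathElements the path components resolved so far
             * @param stored the attributes HDFS has stored for this inode
             * @return the attributes to enforce; return {@code stored} for defaults
             */
            AuthzAttributes getAttributes(String[] pathElements, AuthzAttributes stored);
          }

          /** Plain value object so the plugin never touches INode internals. */
          final class AuthzAttributes {
            private final String user;
            private final String group;
            private final short mode; // POSIX permission bits, e.g. 0750

            AuthzAttributes(String user, String group, short mode) {
              this.user = user;
              this.group = group;
              this.mode = mode;
            }

            String getUser() { return user; }
            String getGroup() { return group; }
            short getMode() { return mode; }
          }
          ```

          Because such a hook would live in the permission checker rather than in the inodes, edit replay (startup or HA standby) would never reach it, which is exactly the property argued for above.
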
          Daryn Sharp added a comment -

          Won't compile, intentionally, due to the plugin not being defined.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12668780/HDFS-6826-permchecker.patch
          against trunk revision 24d920b.

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          -1 javac. The patch appears to cause the build to fail.

          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8029//console

          This message is automatically generated.

          Jitendra Nath Pandey added a comment -

          Allowing complete override of the permission checking is bad too. Plugins should not be in control of the traversal or iteration of inodes. The plugin should be just that - a hook for the core permission checking. Otherwise plugins will be fragile when changes are made to path resolution. And/or plugins will have to implement duplicated logic.

          The path resolution happens before generating the list of inodes for a given path; therefore, in the proposed permission check API, the path resolution will not be controlled by the plugin. The iteration of inodes is actually part of the core permission checking, because we check permission bits (execute mode or read mode) for intermediate directories, which I think was chosen to look more like POSIX. However, a plugin should be able to override those semantics and provide a different model.

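          For readers unfamiliar with the POSIX-style check under discussion, here is a simplified sketch of the intermediate-directory traversal logic; all types are hypothetical, and the real FSPermissionChecker is considerably more involved:

          ```java
          import java.util.List;
          import java.util.Set;

          // Simplified sketch (hypothetical types) of POSIX-style traversal checking:
          // every intermediate directory on the resolved path must grant EXECUTE before
          // the final component's own permission bits are consulted.
          final class TraversalCheckSketch {

            static final int X_OWNER = 0100, X_GROUP = 0010, X_OTHER = 0001;

            static final class Node {
              final String path, owner, group;
              final int mode; // POSIX mode bits, e.g. 0750
              Node(String path, String owner, String group, int mode) {
                this.path = path; this.owner = owner; this.group = group; this.mode = mode;
              }
            }

            static void checkTraverse(List<Node> inodesOnPath, String user,
                                      Set<String> groups) {
              // Every component except the last is an intermediate directory.
              for (int i = 0; i < inodesOnPath.size() - 1; i++) {
                Node dir = inodesOnPath.get(i);
                int bit = user.equals(dir.owner) ? X_OWNER
                        : groups.contains(dir.group) ? X_GROUP : X_OTHER;
                if ((dir.mode & bit) == 0) {
                  throw new SecurityException("Permission denied: user=" + user
                      + " cannot traverse " + dir.path);
                }
              }
            }
          }
          ```

          A plugin overriding these semantics would be replacing exactly this loop with its own model, which is the point of contention above.
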
          Alejandro Abdelnur added a comment -

          Attached (v9) is a functioning patch based on Daryn Sharp's idea. It requires a little refining.

          A few comments on the approach:

          • Instead of being invasive in the INode classes, it is invasive in the PermissionChecker and in the FSDirectory.
          • It does not allow replacing the permission check logic (v7.x does); that could be done in a follow-up JIRA.
          • It does not allow replacing the setter logic for authz info (v7.x does).
          • It seems to be more efficient in how many times it gets called and in how the full path can be inferred by the plugin.
          • Talking with Daryn, he suggested some changes in the FSDirectory for creating file status by bypassing path re-resolution. I'm deferring this to the refactoring he wants to do, as it is not that trivial.

          Refining required:

          • How to handle setting of authz info: setter calls for a path managed by a plugin should be fully ignored, but today the setters take effect. The plugin should be able to prevent those setter calls from happening.
          • The subAccess check is a TODO; the stack logic there needs to carry more info for the plugin (but we don't want to do that if there is no plugin).

          Both the v7.6 and the v9 plugins provide the functionality needed for HMS/Sentry. I'm OK either way.

          Could others weigh in on the preferred approach? I would like to get closure on this ASAP so Sentry can start building on top of it.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12668911/HDFS-6826v9.patch
          against trunk revision 0ac760a.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          -1 findbugs. The patch appears to introduce 1 new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.security.TestPermission
          org.apache.hadoop.hdfs.TestDFSShell
          org.apache.hadoop.hdfs.web.TestWebHDFSXAttr
          org.apache.hadoop.hdfs.server.namenode.snapshot.TestAclWithSnapshot
          org.apache.hadoop.security.TestPermissionSymlinks
          org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion
          org.apache.hadoop.hdfs.server.namenode.TestFileContextXAttr
          org.apache.hadoop.hdfs.server.namenode.TestNameNodeXAttr
          org.apache.hadoop.hdfs.TestEncryptionZones
          org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
          org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshot

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/8036//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/8036//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8036//console

          This message is automatically generated.

          Jitendra Nath Pandey added a comment -

          The latest patch doesn't directly splice into the inodes, but for reference the reason it's bad is because replaying edits, whether at startup or HA standby, should not ever go through the plugin.

          Replaying shouldn't be a problem as long as plugins implement the setters in an idempotent manner. I think we should add this to the javadoc for the interface.

          About the only way to ensure compatibility is for the permission checker to have a hook to call out to the plugin. The plugin cannot override the core behavior.

          The purpose of the plugin is to allow new permission models. Therefore, it must be able to override the permission checking as well.

          I think v7.6 provides more features and is more in line with the purpose of this jira.

          Daryn Sharp added a comment -

          The purpose of the plugin is to allow new permission models. Therefore, it must be able to override the permission checking as well.

          Jitendra Nath Pandey, I'm confused. In the offline call we had for me to understand Argus, it was specifically stated that Argus would not change the existing permissions model. Argus was described as a means to augment the permissions with acls via "policies" that cannot be represented in hdfs such as time/network based, etc.

          Allowing the entire permissions model to be modified to be non-POSIX(ish) compliant is a major change that will break many stack components. A more in-depth analysis and discussion of the impact is required. Alejandro simply needs to substitute "fake" inode attributes, so hopefully we can postpone that to another jira and unblock him in the short term?

          Alejandro Abdelnur added a comment -

          I forgot to comment on the edit-replay concerns: the v7.6 plugin API provides feedback to the plugin implementation indicating whether or not it is within a client filesystem call (it is not when replaying edits).

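          A hedged sketch of how a plugin might use that feedback to keep edit replay safe. The isClientOp name and surrounding types are assumptions for illustration, not the actual v7.6 API:

          ```java
          // Hypothetical sketch: a provider that consults a "within a client call" flag
          // so edit-log replay (startup or HA standby) never re-drives side effects.
          // Every name below is an assumption for illustration, not the v7.6 API.
          public abstract class ReplayAwareProviderSketch {

            /** True when invoked from a live client RPC, false during edit replay. */
            protected abstract boolean isClientOp();

            /** Hook invoked whenever HDFS applies a chmod to a managed path. */
            public final void onSetPermission(String path, short mode) {
              if (!isClientOp()) {
                // Replaying edits: the external system already saw this change when the
                // original client call ran, so applying it again must be a no-op.
                return;
              }
              pushToExternalSystem(path, mode);
            }

            protected abstract void pushToExternalSystem(String path, short mode);
          }
          ```
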
          Jitendra Nath Pandey added a comment -

          Argus was described as a means to augment the permissions with acls via "policies" that cannot be represented in hdfs such as time/network based, etc.

          As I understand it, Argus would also need the ability to add policies on files and directories, including wildcards.

          Allowing the entire permissions model to be modified to be non-POSIX(ish) compliant is a major change that will break many stack components. A more in-depth analysis and discussion of the impact is required.

          The reason for exposing a plugin interface is to allow different organizations to have different models. The default behavior doesn't change at all, so there is no risk to the stack. Cluster admins who choose to use their own plugins would do so under the security policy encompassing the stack in their org, and would be responsible for compatibility with other components.

          Alejandro simply needs to substitute "fake" inode attributes, so hopefully we can postpone that to another jira..

          We need to look at the use case; faking inode attributes is an implementation detail. The use case is to externalize authorization provisioning, and in that respect the HMS/Sentry use case is not that far from Argus's. Having two use cases in this area also indicates that users have different security policies and need us to allow different permission models. Since we are exposing a plugin here, we should look at all the known use cases and think of a design that covers them. Of course, we don't want multiple interfaces and multiple plugins with nuances for the same area of the code and functionality.

          I don't fully understand your reservations on v7.6. This patch covers the known use cases at this point. It doesn't change the default behavior of the system at all. It provides a lot more flexibility to plugin writers to suit their security policies, and of course at their own risk. The patch has gone through many iterations, seems pretty mature now, and will address the needs of both Argus and HMS/Sentry.

          Suresh Srinivas added a comment -

          Daryn Sharp, I have a hard time understanding the specific objections you have to v7.6.

          I understand that you are concerned about externalizing the permission check, but this jira has discussed use cases where it is necessary. This is a pluggable mechanism; that has been our way of making sure that people concerned about a specific piece of functionality can continue with the tried and tested default behavior, while others who want to experiment and add specialization can implement non-default behavior. This has been done for many critical parts of HDFS over the years. Given that v7.6 is already a pluggable interface and the default behavior does not change, I fail to understand why the existing patch, which has been reviewed over many iterations, is not ready.

          Jitendra Nath Pandey added a comment -

          Daryn Sharp,
          The v7.6 patch addresses the use cases we have talked about in this jira. I understand that we are introducing a new interface, and it may need to evolve or be fixed going forward.
          How about we annotate the interface audience as limited-private for Argus and Sentry to limit its usage for now? We could also annotate it as unstable so that the interface can evolve and mature over time without compatibility concerns.

          Daryn Sharp added a comment -

          My main concern and opposition is that the current approach so tightly couples the plugin to the NN's current implementation that it will impede future features. Too many implementation details are leaked or exposed to plugins that ought not be. I have many changes related to combined path resolution, locking, and permission checks that will be jeopardized.

          Specific issues:

          1. It's all too tightly coupled to the current permission checker implementation
          2. The plugin will break every time a new namesystem method is added
          3. Block operations shouldn't be going through the plugin, should they?
          4. MAJOR: Splicing at the ClientNamenodeProtocolServerSideTranslatorPB level allows webhdfs to sidestep the plugin
          5. Thread locals aren't always performant and they are used heavily primarily to allow the plugin to avoid blowing up edit log processing because...
          6. Splicing into inodes is too low of a level. All authz checks should have already been performed.
          7. Why splice into inode setters since checked exceptions cannot be thrown? Silently ignoring the set will discombobulate the user. Throwing a runtime exception may leave the namesystem in an inconsistent state, thus I don't see the value add.
          8. Setters in DefaultAuthorizationProvider are too fragile. It's exposing too many implementation details as to how inode information is encoded - today.
          9. Using a static for the provider allows one and only one custom authz provider. What if the user wants to use both sentry and argus?

          Now let's look at the two use cases; correct me if I'm wrong:

          • Argus: wants to augment the permission model for acls that cannot be represented in hdfs acls
          • Sentry: wants to return fake user/group/perms to avoid having other stack components own their backing files

          Based on our offline discussion of Argus, it should be the easiest to implement, given the explicit agreement that it will only augment the permission model. It just needs an additional callout.

          The Sentry use case is a bit dubious but generally simple to add in a manner that doesn't complicate future changes. It just needs to substitute inode attributes prior to the standard permission checks. The file status change may seem like a hack (today) but it will dovetail with fsdir changes to only operate on pre-resolved inodes - not re-resolved paths. File status will simply return the attributes of the last node, which will conveniently be the dummy values already substituted by the plugin.

          Alejandro Abdelnur added a comment -

          Daryn Sharp, thanks for the detailed review. A few comments on it.

          On #2, I don’t see how this would happen unless you are adding a new authorization property or a new way of checking permissions independent of the current one.

          On #4, my bad, I missed the special entry point of webhdfs; that is easily fixable.

          On #5, AFAIK the performance of thread locals has improved significantly since their inception. I don’t think this should be an issue.

          On #8, the DefaultAuthorizationProvider is the implementation provided by Hadoop. Also, its implementation is not ‘public’.

          On #9, I think it is a reasonable assumption for now to have a single authz provider. If more than one is needed, that could be done with a multiplexor implementation (as you could assume they will have zero intersection in the HDFS sub-trees they manage).

          Also, I think that because v7.6 works on resolved INodes, we don’t need to worry about snapshot paths or inode paths.

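          A minimal sketch of the multiplexor idea under the stated zero-intersection assumption; all names here are hypothetical:

          ```java
          import java.util.LinkedHashMap;
          import java.util.Map;

          // Hypothetical multiplexor: routes attribute lookups to whichever provider
          // manages the enclosing subtree, falling back to default HDFS behavior when
          // no provider claims the path. Assumes managed subtrees never overlap.
          public class MultiplexingProviderSketch {

            interface SubtreeProvider {
              Object getAttributes(String path); // stand-in for the real return type
            }

            private final Map<String, SubtreeProvider> bySubtree = new LinkedHashMap<>();

            void register(String subtreeRoot, SubtreeProvider provider) {
              String root = subtreeRoot.endsWith("/") ? subtreeRoot : subtreeRoot + "/";
              bySubtree.put(root, provider);
            }

            /** Returns the managing provider, or null to use default HDFS attributes. */
            SubtreeProvider resolve(String path) {
              for (Map.Entry<String, SubtreeProvider> e : bySubtree.entrySet()) {
                if ((path + "/").startsWith(e.getKey())) {
                  return e.getValue();
                }
              }
              return null;
            }
          }
          ```
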
          Suresh Srinivas added a comment -

          Daryn Sharp, I agree with some of the comments you have made. One of the main goals from the Ranger (previously Argus) perspective is to allow a plugin for checking permissions. This will help the Argus project implement a centralized permission model. This is a lot smaller and more self-contained than the current scope of changes. While this discussion can continue, I want to go ahead and create a jira for the narrow scope of pluggability that Ranger is looking for.

          Let me know what you guys think. I will create a separate jira, if there is general consensus.

          Aaron T. Myers added a comment -

          The problem I have with just an FsPermissionChecker plugin is that a user or admin who goes to look at the permissions of the relevant files in HDFS will see completely different permissions than what are in fact being enforced by the NN. I'm certainly not wedded to the current implementation that's been posted on this JIRA, but I feel pretty strongly that any solution that allows for outsourcing the HDFS authorization decisions should also show to the user the permissions that are being enforced. If your proposed alternate implementation supports that, then it'll be fine by me.

          Suresh Srinivas added a comment -

          Aaron T. Myers, I see your point. I agree with it. The root cause of this seems to be the place where we are adding pluggability - FSPermissionChecker, which ties the access control implementation to permissions.

          How about this? Let's say we add a pluggable AccessControlPolicy that gets called before the permission checker. One could have custom policies that check whether access is allowed based on wildcard paths, where the access is coming from, etc., and allow the operation. The policy implementation decides to allow access, disallow access, or continue (to FSPermissionChecker). Would that work? Thoughts?

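          To make the proposal concrete, a rough sketch of the three-way decision (names are assumptions, not an agreed API):

          ```java
          // Rough sketch of the proposal above: a policy consulted before the standard
          // FSPermissionChecker, returning a three-way decision. Hypothetical names.
          public interface AccessControlPolicySketch {

            enum Decision { ALLOW, DENY, CONTINUE }

            Decision check(String path, String user, String requestedAccess);
          }

          // Call-site sketch: only fall through to the default checker on CONTINUE.
          final class PolicyChain {
            static void enforce(AccessControlPolicySketch policy, String path,
                                String user, String access) {
              switch (policy.check(path, user, access)) {
                case ALLOW:
                  return; // externally authorized, skip the default checker
                case DENY:
                  throw new SecurityException("Permission denied by policy: "
                      + user + " on " + path);
                case CONTINUE:
                  checkWithDefaultPermissions(path, user, access); // default HDFS model
              }
            }

            private static void checkWithDefaultPermissions(String path, String user,
                                                            String access) {
              // placeholder for the existing FSPermissionChecker logic
            }
          }
          ```

          The CONTINUE branch is what keeps the tried and tested default model intact for deployments that install no custom policy.
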
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12668911/HDFS-6826v9.patch
          against trunk revision 7896815.

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8919//console

          This message is automatically generated.

          Jitendra Nath Pandey added a comment -

          Let's say we add a pluggable AccessControlPolicy that gets called before the permission checker. One could have custom policies that check whether access is allowed based on wildcard paths, where the access is coming from, etc. The policy implementation decides to allow access, disallow access, or continue (to FSPermissionChecker).

          +1 to have a pluggable AccessControlPolicy. The policy decides whether authorization is externalized, what the source of the authorization decision is, or whether the default HDFS permission model applies. This will let users be aware that authorization is externalized.
          I think with this approach we can separately address the Ranger (formerly called Argus) use case.

          Aaron T. Myers added a comment -

          Just to be completely explicit, would this design allow the plugin AccessControlPolicy to affect what's returned to the client for results from calls to DistributedFileSystem#listStatus? That's really the crux of what I'm after. If this proposal allows for that, then it'll work for me. I want to make sure that the 'hadoop fs -ls ...' output is capable of actually displaying the permissions/ACLs that are being enforced at any moment in time, regardless of what backend policy is in fact determining what those permissions/ACLs are.
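
          For concreteness, this is the client-side call whose output is at stake: any plugin-supplied permissions would have to surface in these FileStatus values, since they are exactly what 'hadoop fs -ls' prints (standard FileSystem API; the path here is made up):

            import org.apache.hadoop.conf.Configuration;
            import org.apache.hadoop.fs.FileStatus;
            import org.apache.hadoop.fs.FileSystem;
            import org.apache.hadoop.fs.Path;

            public class ListPermissions {
              public static void main(String[] args) throws Exception {
                FileSystem fs = FileSystem.get(new Configuration());
                // These fields are what 'hadoop fs -ls' displays, so an
                // authorization plugin must be reflected in them to satisfy
                // the concern above.
                for (FileStatus st : fs.listStatus(new Path("/warehouse/sales"))) {
                  System.out.printf("%s %s %s %s%n",
                      st.getPermission(), st.getOwner(), st.getGroup(), st.getPath());
                }
              }
            }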

          Arun Suresh added a comment -

          Hey Alejandro Abdelnur, if you don't mind, may I take over this JIRA? I have a couple of ideas on an approach for this which I would like to post as a patch.

          Alejandro Abdelnur added a comment -

          Arun Suresh, sure no prob. Thanks for following up with it.

          Arun Suresh added a comment -

          Uploading a new patch (.10.patch)

          The patch, now rebased against trunk, is based on Daryn Sharp's patch, which Alejandro Abdelnur had fleshed out (v9).

          It addresses Jitendra Nath Pandey's and Suresh Srinivas's concerns as follows:

          1. It splits the authorization into two parts. INodeAttributeProvider can be subclassed so that the implementation decides which AclFeatures, permissions, user and group are returned for an INode (see the sketch below).
          2. It also allows the implementer to specify an AccessControlEnforcer (by overriding the getExternalAccessControlEnforcer() method in the INodeAttributeProvider implementation). This can be used to override the checkPermission functionality of FsPermissionChecker.
          3. The plugin is invoked when a client does a getfacl / ls, which ensures the output displays what is actually enforced; I guess that was Aaron T. Myers's concern.
          4. The plugin is not invoked in the setAcl/setPermission path. This is fine for our purposes, but if you guys think it is important, I can provide a fix for that (possibly in a follow-up JIRA?).

          Do let me know what you guys think, and if you feel it's the right approach, I can proceed to fix it up some more.
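
          As an illustration of points 1 and 2, a minimal pass-through subclass against this API could look as follows (PassThroughAttributeProvider is a made-up name; a real plugin such as Sentry or Ranger would map pathElements to its own policy store):

            import org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider;
            import org.apache.hadoop.hdfs.server.namenode.INodeAttributes;

            // Pass-through provider: returns the stored attributes unchanged
            // and, by not overriding getExternalAccessControlEnforcer(), keeps
            // the default enforcer (FsPermissionChecker).
            public class PassThroughAttributeProvider extends INodeAttributeProvider {

              @Override
              public void start() {
                // Connect to the external policy service, warm caches, etc.
              }

              @Override
              public void stop() {
                // Release whatever start() acquired.
              }

              @Override
              public INodeAttributes getAttributes(String[] pathElements,
                  INodeAttributes inode) {
                // Substituting owner/group/permissions/ACLs here changes both
                // what getfacl / ls report and what the default enforcer checks.
                return inode;
              }
            }

          The provider is wired into the NameNode via the key the patch adds to DFSConfigKeys (dfs.namenode.inode.attributes.provider.class in the committed version).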

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12703405/HDFS-6826.10.patch
          against trunk revision 5578e22.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
          org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
          org.apache.hadoop.hdfs.server.namenode.snapshot.TestAclWithSnapshot
          org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshot
          org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion

          The following test timeouts occurred in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.server.namenode.ha.TestHAAppend
          org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager
          org.apache.hadoop.hdfs.TestAppendSnapshotTruncate

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9792//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9792//console

          This message is automatically generated.

          Arun Suresh added a comment -

          Will be fixing the test cases, but I was wondering if I can get validation as to whether the approach would work? Ping Suresh Srinivas, Jitendra Nath Pandey, Daryn Sharp, Aaron T. Myers.

          Jitendra Nath Pandey added a comment -

          Arun Suresh,
          Thanks for the proposal; it looks very good. I would suggest passing the FsPermissionChecker instance in the AccessControlEnforcer#checkPermission interface as well, as the default AccessControlEnforcer. This will give a lot of flexibility to an external authorization module, e.g.:
          1) The external module can get access to the callerUgi, which would be a must for many implementations to evaluate the callerUgi for ownership, group membership or other policies (a fragment illustrating this follows below).
          2) It would also allow an external implementation to fall back to the default implementation if it needs to.
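
          As an example of point 1, here is a hypothetical fragment (not from the patch) showing the kind of test an external module can run once the callerUgi is passed in directly:

            import java.util.Arrays;
            import org.apache.hadoop.security.UserGroupInformation;

            // Hypothetical helper: an ownership / group-membership test that an
            // external enforcer can perform itself when it receives the callerUgi.
            final class UgiChecks {
              static boolean ownsOrInGroup(UserGroupInformation callerUgi,
                  String fileOwner, String fileGroup) {
                return callerUgi.getShortUserName().equals(fileOwner)
                    || Arrays.asList(callerUgi.getGroupNames()).contains(fileGroup);
              }
            }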

          Arun Suresh added a comment -

          Thanks for the review, Jitendra Nath Pandey.
          Uploading a fresh patch incorporating your suggestions and fixing some of the failed test cases.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12705753/HDFS-6826.11.patch
          against trunk revision 91baca1.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          -1 javac. The patch appears to cause the build to fail.

          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9988//console

          This message is automatically generated.

          Arun Suresh added a comment -

          Missed a file in my earlier patch.
          Uploading a fix.

          Arun Suresh added a comment -

          Updating the patch to include a default implementation of INodeAttributeProvider in addition to the default AccessControlEnforcer.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12705775/HDFS-6826.12.patch
          against trunk revision e37ca22.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          -1 javadoc. The javadoc tool appears to have generated 3 warning messages.
          See https://builds.apache.org/job/PreCommit-HDFS-Build/9990//artifact/patchprocess/diffJavadocWarnings.txt for details.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead
          org.apache.hadoop.security.TestPermission
          org.apache.hadoop.hdfs.TestDFSShell
          org.apache.hadoop.hdfs.web.TestWebHDFSXAttr
          org.apache.hadoop.hdfs.TestDistributedFileSystem
          org.apache.hadoop.hdfs.web.TestWebHDFS
          org.apache.hadoop.hdfs.server.namenode.snapshot.TestAclWithSnapshot
          org.apache.hadoop.security.TestPermissionSymlinks
          org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
          org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
          org.apache.hadoop.hdfs.server.namenode.TestFileContextAcl
          org.apache.hadoop.hdfs.TestDFSPermission
          org.apache.hadoop.hdfs.server.namenode.TestAuditLogger
          org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshottableDirListing
          org.apache.hadoop.fs.TestGlobPaths
          org.apache.hadoop.hdfs.server.namenode.TestFileContextXAttr
          org.apache.hadoop.hdfs.server.namenode.TestFileTruncate
          org.apache.hadoop.hdfs.TestFileAppend2
          org.apache.hadoop.hdfs.server.namenode.TestNameNodeXAttr
          org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
          org.apache.hadoop.hdfs.TestEncryptionZones
          org.apache.hadoop.hdfs.server.namenode.TestNameNodeAcl
          org.apache.hadoop.hdfs.TestFsShellPermission
          org.apache.hadoop.tracing.TestTracing
          org.apache.hadoop.hdfs.server.balancer.TestBalancer
          org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
          org.apache.hadoop.hdfs.TestSafeMode
          org.apache.hadoop.hdfs.server.namenode.TestFsck
          org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser
          org.apache.hadoop.hdfs.server.namenode.TestAuditLogs
          org.apache.hadoop.hdfs.server.namenode.TestFSPermissionChecker
          org.apache.hadoop.hdfs.web.TestWebHDFSAcl
          org.apache.hadoop.fs.permission.TestStickyBit

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9990//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9990//console

          This message is automatically generated.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12705799/HDFS-6826.13.patch
          against trunk revision e37ca22.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          -1 javadoc. The javadoc tool appears to have generated 3 warning messages.
          See https://builds.apache.org/job/PreCommit-HDFS-Build/9993//artifact/patchprocess/diffJavadocWarnings.txt for details.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.tracing.TestTracing

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9993//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9993//console

          This message is automatically generated.

          Arun Suresh added a comment -

          Fixing the Javadoc errors.
          The remaining test case failure is unrelated.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12705842/HDFS-6826.14.patch
          against trunk revision 4e886eb.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.tracing.TestTracing
          org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/9999//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9999//console

          This message is automatically generated.

          Jitendra Nath Pandey added a comment -

          Hi Arun Suresh, thanks for the quick turnaround on this.
          I really liked your earlier HDFS-6826.10.patch, because AccessControlEnforcer was a very simple and pure interface.
          Comparing it with the later patches, I think it was a bad idea on my part to expect the default AccessControlEnforcer to return callerUgi, supergroup, etc., although we need them for default permission checking. The reason is that many implementations might want to re-use AccessControlEnforcer objects and would like to avoid tracking a callerUgi in their state, even though they need it for policy enforcement.
          I really prefer AccessControlEnforcer as an interface instead of an abstract class, because an abstract class requires the implementations to initialize the base class with many parameters that they don't need to track. Therefore, I would suggest the following simple modification on top of HDFS-6826.10.patch: change the AccessControlEnforcer#checkPermission interface to pass a few additional parameters. In the following snippet, I have added the suggested new parameters at the beginning of the parameter list.

             public static interface AccessControlEnforcer {

               public void checkPermission(String fsOwner, String superGroup,
                   UserGroupInformation callerUgi, AccessControlEnforcer defaultEnforcer,
                   INodeAttributes[] inodeAttrs, INode[] inodes,
                   byte[][] pathByNameArr, int snapshotId, String path, int ancestorIndex,
                   boolean doCheckOwner, FsAction ancestorAccess, FsAction parentAccess,
                   FsAction access, FsAction subAccess, boolean ignoreEmptyDir)
                       throws AccessControlException;
             }


          I think with HDFS-6826.10.patch and the above change, it will be a very clean and simple implementation.

          Thanks again for taking up this work.

          Arun Suresh added a comment -

          Jitendra Nath Pandey, yup, agreed: interfaces are cleaner. Will revert.
          Uploading a new patch (.15.patch) based on v10, but with the test cases fixed.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12706099/HDFS-6826.15.patch
          against trunk revision 7f1e2f9.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.tracing.TestTracing

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/10017//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/10017//console

          This message is automatically generated.

          Arun Suresh added a comment -

          Minor doc and function-parameter changes to establish that the default AccessControlEnforcer is also available to the implementation, so that it may be used as a fallback mechanism if needed.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12706395/HDFS-6826.16.patch
          against trunk revision 4cd54d9.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 2 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.tracing.TestTracing

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/10025//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/10025//console

          This message is automatically generated.

          Jitendra Nath Pandey added a comment -

          Thanks for the quick turnaround, Arun Suresh.
          The latest patch looks pretty good and works for us.
          I want to point out that the API
          public AccessControlEnforcer getExternalAccessControlEnforcer(AccessControlEnforcer defaultEnforcer)
          assumes a one-to-one mapping between the external enforcer and the default enforcer. The FSPermissionChecker is the default enforcer in this case, and it is instantiated for every request; therefore, there is an implicit assumption that the callerUgi in the checkPermission API is the same as the callerUgi used to create the FSPermissionChecker that was passed as the default enforcer.
          Therefore, I would prefer passing the default enforcer to the AccessControlEnforcer#checkPermission API as I mentioned in my previous comment, and not passing it to getExternalAccessControlEnforcer.

          However, I will still be OK if you prefer to stick to the model of a one-to-one mapping between the default enforcer object and the external enforcer object, where both are tied to the specific file system request.
          Please update with your call on the above.

          +1 pending the decision on the above.

          Arun Suresh added a comment -

          Thanks again for the review, Jitendra Nath Pandey. Apologies for not clarifying why I dropped the defaultEnforcer from being passed into the checkPermission method.

          If a user is interested only in providing an implementation of INodeAttributeProvider but requires the default enforcer, and assuming getExternalAccessControlEnforcer() does not take any arguments, then she would have to explicitly override the method and either:

          1. return null (the only way to get the default enforcer / FsPermissionChecker instance is via the FSNamesystem or FSDirectory object, and I was not comfortable passing either of them to user-specified code; nor did I want to set the FsPermissionChecker into the INodeAttributeProvider instance via a setter, which I like to avoid), after which FsPermissionChecker would have to null-check the return value and call checkPermission on the correct instance; or
          2. provide a dummy implementation of AccessControlEnforcer that delegates its checkPermission method to the passed-in default enforcer.

          I felt passing the defaultEnforcer into getExternalAccessControlEnforcer() would probably be cleaner, so if you don't mind, I would prefer keeping it as it is (a sketch of the resulting wiring follows below).
          Hope this makes sense?
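
          To make the wiring concrete, here is a sketch of the fallback pattern under the interface as committed (FallbackEnforcer and externalPolicyCovers are made-up names); the provider's getExternalAccessControlEnforcer(defaultEnforcer) would simply return new FallbackEnforcer(defaultEnforcer):

            import org.apache.hadoop.fs.permission.FsAction;
            import org.apache.hadoop.hdfs.server.namenode.INode;
            import org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider.AccessControlEnforcer;
            import org.apache.hadoop.hdfs.server.namenode.INodeAttributes;
            import org.apache.hadoop.security.AccessControlException;
            import org.apache.hadoop.security.UserGroupInformation;

            // Wraps the default enforcer handed over by the provider: it consults
            // an external policy first and falls back to stock HDFS semantics
            // when the policy has no opinion about the path.
            class FallbackEnforcer implements AccessControlEnforcer {
              private final AccessControlEnforcer defaultEnforcer;

              FallbackEnforcer(AccessControlEnforcer defaultEnforcer) {
                this.defaultEnforcer = defaultEnforcer;
              }

              @Override
              public void checkPermission(String fsOwner, String superGroup,
                  UserGroupInformation callerUgi, INodeAttributes[] inodeAttrs,
                  INode[] inodes, byte[][] pathByNameArr, int snapshotId,
                  String path, int ancestorIndex, boolean doCheckOwner,
                  FsAction ancestorAccess, FsAction parentAccess, FsAction access,
                  FsAction subAccess, boolean ignoreEmptyDir)
                  throws AccessControlException {
                if (!externalPolicyCovers(path)) {
                  // No external policy for this path: defer to the default checker.
                  defaultEnforcer.checkPermission(fsOwner, superGroup, callerUgi,
                      inodeAttrs, inodes, pathByNameArr, snapshotId, path,
                      ancestorIndex, doCheckOwner, ancestorAccess, parentAccess,
                      access, subAccess, ignoreEmptyDir);
                  return;
                }
                // Otherwise enforce the external policy here, throwing
                // AccessControlException on denial.
              }

              private boolean externalPolicyCovers(String path) {
                return false; // placeholder for a real policy lookup
              }
            }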

          Jitendra Nath Pandey added a comment -

          That makes sense.
          +1

          I will commit this tomorrow, unless there is an objection.

          Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #7424 (See https://builds.apache.org/job/Hadoop-trunk-Commit/7424/)
          HDFS-6826. Plugin interface to enable delegation of HDFS authorization assertions. Contributed by Arun Suresh. (jitendra: rev 53a28afe293e5bf185c8d4f2c7aea212e66015c2)

          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSPermissionChecker.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFileAttributes.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodesInPath.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributeProvider.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributes.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/DefaultINodeAttributesProvider.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectoryAttributes.java
          Jitendra Nath Pandey added a comment -

          I have committed this to trunk, branch-2 and branch-2.7. Thanks to Arun Suresh for driving this to completion through several iterations on the patch, and thanks to Alejandro Abdelnur for initial versions of the patch.

          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #143 (See https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/143/)
          HDFS-6826. Plugin interface to enable delegation of HDFS authorization assertions. Contributed by Arun Suresh. (jitendra: rev 53a28afe293e5bf185c8d4f2c7aea212e66015c2)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectoryAttributes.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributes.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFileAttributes.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSPermissionChecker.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodesInPath.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributeProvider.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/DefaultINodeAttributesProvider.java
          Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Yarn-trunk #877 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/877/)
          HDFS-6826. Plugin interface to enable delegation of HDFS authorization assertions. Contributed by Arun Suresh. (jitendra: rev 53a28afe293e5bf185c8d4f2c7aea212e66015c2)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributes.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectoryAttributes.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodesInPath.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributeProvider.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSPermissionChecker.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFileAttributes.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/DefaultINodeAttributesProvider.java
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk #2075 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2075/)
          HDFS-6826. Plugin interface to enable delegation of HDFS authorization assertions. Contributed by Arun Suresh. (jitendra: rev 53a28afe293e5bf185c8d4f2c7aea212e66015c2)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributeProvider.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/DefaultINodeAttributesProvider.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSPermissionChecker.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectoryAttributes.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributes.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodesInPath.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFileAttributes.java
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #134 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/134/)
          HDFS-6826. Plugin interface to enable delegation of HDFS authorization assertions. Contributed by Arun Suresh. (jitendra: rev 53a28afe293e5bf185c8d4f2c7aea212e66015c2)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodesInPath.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributeProvider.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributes.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSPermissionChecker.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/DefaultINodeAttributesProvider.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFileAttributes.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectoryAttributes.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #143 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/143/)
          HDFS-6826. Plugin interface to enable delegation of HDFS authorization assertions. Contributed by Arun Suresh. (jitendra: rev 53a28afe293e5bf185c8d4f2c7aea212e66015c2)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectoryAttributes.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFileAttributes.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodesInPath.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/DefaultINodeAttributesProvider.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributeProvider.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributes.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSPermissionChecker.java
          Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2093 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2093/)
          HDFS-6826. Plugin interface to enable delegation of HDFS authorization assertions. Contributed by Arun Suresh. (jitendra: rev 53a28afe293e5bf185c8d4f2c7aea212e66015c2)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSPermissionChecker.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributes.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFileAttributes.java
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/DefaultINodeAttributesProvider.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectoryAttributes.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodesInPath.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributeProvider.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
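For readers tracing the final shape of the feature: the committed change centers on the new INodeAttributeProvider abstract class listed in the integration builds above. As a rough illustration only, a minimal pass-through provider built against that API might look like the sketch below. ExampleAttributeProvider is a hypothetical class name and the pass-through behavior is purely illustrative; the overridden methods are those the committed abstract class declares.

package org.example.hdfs;

import org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider;
import org.apache.hadoop.hdfs.server.namenode.INodeAttributes;

public class ExampleAttributeProvider extends INodeAttributeProvider {

  @Override
  public void start() {
    // Invoked when the NameNode activates the provider; an external
    // authorizer would establish its connection/state here.
  }

  @Override
  public void stop() {
    // Invoked on NameNode shutdown; release any external resources.
  }

  @Override
  public INodeAttributes getAttributes(String[] pathElements,
      INodeAttributes inode) {
    // Pass-through: use the attributes as stored in the NameNode.
    // A real provider would inspect pathElements and, for paths it
    // manages, return user/group/permission attributes derived from
    // the external system (e.g. a table or collection ACL).
    return inode;
  }
}

The provider is selected through the new configuration key added to DFSConfigKeys (dfs.namenode.inode.attributes.provider.class in hdfs-site.xml). The interface also exposes getExternalAccessControlEnforcer(), through which a provider may supply its own AccessControlEnforcer for permission checks, falling back to the default FSPermissionChecker-based enforcer otherwise.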

  People

  • Assignee: Arun Suresh
  • Reporter: Alejandro Abdelnur
  • Votes: 1
  • Watchers: 41
