Details

    • Type: New Feature
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: security
    • Labels:
      None
    • Tags:
      security

      Description

      This is an umbrella Jira filing to oversee a set of proposals for introducing a new master service for Hadoop Single Sign On (HSSO).

      There is an increasing need for pluggable authentication providers that authenticate both users and services as well as validate tokens in order to federate identities authenticated by trusted IDPs. These IDPs may be deployed within the enterprise or third-party IDPs that are external to the enterprise.

      These needs speak to a specific pain point: a narrow integration path into the enterprise identity infrastructure. Kerberos is a fine solution for those that already have it in place or are willing to adopt its use, but there remains a class of users for whom this is unacceptable and who need to integrate with a wider variety of identity management solutions.

      Another specific pain point is that of rolling and distributing keys. A related and integral part of the HSSO server is a library called the Credential Management Framework (CMF), which will be a common library for easing the management of secrets, keys and credentials.

      Initially, the existing delegation, block access and job tokens will continue to be utilized. There may be some changes required to leverage a PKI based signature facility rather than shared secrets. This is a means of addressing the pain point of distributing shared secrets.
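      As a rough illustration of the PKI based signing idea (the class and token payload layout below are purely hypothetical, not a committed API), the token authority would sign token payloads with its private key while any service holding only the published public key can verify them:

      {code:java}
      import java.nio.charset.StandardCharsets;
      import java.security.KeyPair;
      import java.security.KeyPairGenerator;
      import java.security.Signature;

      public class PkiTokenSketch {
        public static void main(String[] args) throws Exception {
          // The token authority generates (and periodically rolls) its signing key pair.
          KeyPair authorityKeys = KeyPairGenerator.getInstance("RSA").generateKeyPair();

          // Issuance: sign the serialized token payload with the authority's private key.
          byte[] payload = "user=alice,issued=...,expires=...".getBytes(StandardCharsets.UTF_8);
          Signature signer = Signature.getInstance("SHA256withRSA");
          signer.initSign(authorityKeys.getPrivate());
          signer.update(payload);
          byte[] tokenSignature = signer.sign();

          // Verification: a service needs only the published public key,
          // so no shared secret ever has to be distributed to the services.
          Signature verifier = Signature.getInstance("SHA256withRSA");
          verifier.initVerify(authorityKeys.getPublic());
          verifier.update(payload);
          System.out.println("token valid: " + verifier.verify(tokenSignature));
        }
      }
      {code}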

      This project will primarily centralize the responsibility of authentication and federation into a single service that is trusted across the Hadoop cluster and optionally across multiple clusters. This greatly simplifies a number of things in the Hadoop ecosystem:

      1. a single token format that is used across all of Hadoop regardless of authentication method
      2. a single service to have pluggable providers instead of all services
      3. a single token authority that would be trusted across the cluster/s and through PKI encryption be able to easily issue cryptographically verifiable tokens
      4. automatic rolling of the token authority’s keys and publishing of the public key for easy access by those parties that need to verify incoming tokens
      5. use of PKI for signatures eliminates the need for securely sharing and distributing shared secrets

      In addition to serving as the internal Hadoop SSO service, this service will be leveraged by the Knox Gateway from the cluster perimeter in order to acquire the Hadoop cluster tokens. The same token mechanism that is used for internal services will be used to represent user identities, providing for interesting scenarios such as SSO across Hadoop clusters within an enterprise and/or into the cloud.

      The HSSO service will comprise three major components and capabilities:

      1. Federating IDP – authenticates users/services and issues the common Hadoop token
      2. Federating SP – validates the token of trusted external IDPs and issues the common Hadoop token
      3. Token Authority – management of the common Hadoop tokens – including:
      a. Issuance
      b. Renewal
      c. Revocation

      As this is a meta Jira for tracking this overall effort, the details of the individual efforts will be submitted along with the child Jira filings.

      Hadoop-Common would seem to be the most appropriate home for such a service and its related common facilities. We will also leverage and extend existing common mechanisms as appropriate.

        Issue Links

          Activity

          Andrew Purtell added a comment -

          Does this not duplicate HADOOP-9392 exactly? Maybe you haven't seen that one yet? So we will have one SSO coming out of HADOOP-9392 and another coming out of Knox via this JIRA? It might be good to have two competing alternatives (or more), but I wonder if there is a way to do this together on HADOOP-9392, since we are clearly working on exactly the same objective.

          Larry McCay added a comment -

          Hi Andrew - thanks for the pointer. I have seen 9392 and absolutely believe that they should be accomplished together and that they complement one another. I don't believe that 9392 spells out a central master service for SSO. It correctly targets the need for pluggable authentication providers and a common token validation mechanism. The difference is that we don't want to be bothered with many authentication providers at each service.

          Perhaps I have missed that objective in the 9392 proposal? If so, please point me to the plans for a new master service and its responsibilities. We will certainly need to align on that goal as it will have an impact on the work at the perimeter being done within Knox.

          Daryn Sharp added a comment -

          I think this is also going to overlap with the pluggable SASL and RPCv9 changes for the client and server to correctly negotiate a SASL protocol. I've got a lot of questions, but do you have a simple arch doc, preferably even just a picture, that shows the expected network topology and interactions between the various services in hadoop that make this SSO unique?

          Andrew Purtell added a comment -

          Having a central master service for SSO is a design choice. HADOOP-9392 proposes a pluggable design exactly because a central master service for SSO is not a solution for all environments. This JIRA is a nice, clearly defined subset of the work for HADOOP-9392, however. Isn't this work appropriately a subtask of HADOOP-9392? I think you are describing it as such; please correct me if I am mistaken. The title of this JIRA and that of HADOOP-9392 are almost exactly the same, and the goals for this JIRA are largely already captured under HADOOP-9392, i.e. token based authentication and SSO. We should endeavor to resolve the duplication as a shared community effort.

          Larry McCay added a comment -

          Hi Daryn - Yes, I believe that an effort like this necessarily overlaps with tasks like that and I fully expect many questions. I will need some additional information about the SASL/RPCv9 work that you reference in order to share that context, though. I'm interested in your kerberos and token decoupling work as well. I'm currently in the process of refining my simple architecture pictures and will be posting them today or tomorrow. I need to try and align certain terms with other efforts already underway.

          Daryn Sharp added a comment -

          Maybe all relevant parties should meet up during the Hadoop Summit, assuming we're all going?

          In a tiny nutshell, although maybe not so seemingly small:

          • support multiple SASL mechanisms
          • support negotiation of SASL mechanisms
          • support multiple protocols per mechanism
          • add server id hints for sasl clients
            • support kerberos auth to servers with arbitrary service principals
            • completely decouple host/ip from tokens
          • aforementioned supports servers with multiple NICs
          • clients may access a server via any hostname, ip, or even CNAME for a server

          Currently an RPC client will, based upon its own config, decide the one and only SASL auth mechanism to use. If the server doesn't support that mechanism, the only options the server has are to reject or to tell the client to do simple (insecure) auth. Pre-connection, the client guesses if it can use a token and assumes it can find one based on host or ip.

          This does not work in a heterogeneous security environment. If the client supports A & B auth and the server does only B, and the client dictates A, the server must reject because there's no way to negotiate B. The client is also ill-equipped to know if it has a token w/o a server hint.

          Kerberos authentication does not work across realms w/o cross-realm trust. The client cannot connect to those NNs using different realms because the client assumes all NN service principals can be divined by subbing _HOST in user/_HOST@REALM.

          I'm changing the sasl auth sequence so the server advertises mechanisms in preferred order. The client instantiates the first one that it supports. Using the javax sasl factory framework, we can decouple hardcoded instantiations of the sasl clients in order to support multiple auth methods that are dynamically loaded.

          Auth methods and SASL mechanisms are hardcoded with support for one and only one auth method per mechanism. I'm extending the SASL negotiation to use both mechanism & protocol so we can support protocols over DIGEST-MD5 other than delegation token. This would allow someone to implement ldap.

          The client no longer assumes it knows the token required by the server, or the service principal for kerberos. The server's advertisement of mechanisms will return DIGEST-MD5/token/server-id if it supports tokens, or GSSAPI/krb5/service-principal. The client will use the server-id to find a token, or the correct service principal to get a TGS. This enables support for multiple NICs, ips, hostnames, etc. CNAMEs will also be supported.

          This could eventually lead to the client authenticating on demand instead of assuming it knows how to login at startup.
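          To make the negotiation flow above concrete, here is a minimal, hypothetical sketch of a client walking the server's advertised mechanism list (in the server's preferred order) and instantiating the first one the JVM supports via the standard javax.security.sasl factory; the advertisement format and the protocol/server-id values are illustrative only, not the actual patch:

          {code:java}
          import java.util.HashMap;
          import java.util.Map;
          import javax.security.auth.callback.CallbackHandler;
          import javax.security.sasl.Sasl;
          import javax.security.sasl.SaslClient;
          import javax.security.sasl.SaslException;

          public class SaslNegotiationSketch {
            // The server's advertisement might look like: GSSAPI, DIGEST-MD5 (preferred order).
            static SaslClient pickFirstSupported(String[] advertised, String serverId,
                                                 CallbackHandler callbacks) throws SaslException {
              Map<String, Object> props = new HashMap<String, Object>();
              props.put(Sasl.QOP, "auth"); // "auth-conf" would request post-auth encryption
              for (String mechanism : advertised) {
                // createSaslClient returns null when no factory supports the mechanism,
                // letting the client fall through to the server's next preference.
                SaslClient client = Sasl.createSaslClient(
                    new String[] { mechanism }, null /* authzid */,
                    "hdfs" /* protocol, illustrative */, serverId, props, callbacks);
                if (client != null) {
                  return client;
                }
              }
              throw new SaslException("No mutually supported SASL mechanism");
            }
          }
          {code}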

          Larry McCay added a comment -

          Daryn - That sounds like great work. It seems to me that we need to ensure
          that we can consume this within the SSO server as well as at the service
          level. I am also interested in the client discovery of authentication
          mechanisms. We will likely want to extend that to include NON-SASL based
          mechanisms as well. We have REST clients with various OAuth-type
          requirements as well.

          A get together at Hadoop Summit sounds like a great idea! I'll see what I
          can arrange.

          Owen O'Malley added a comment -

          Andrew, Having a SSO server for Hadoop seems pretty complementary to the work that had previously been in HADOOP-9392. Certainly in many enterprises having a single server where you need to plug in authentication is much easier than using Kerberos for everything.

          Andrew Purtell added a comment -

          Owen O'Malley If the goal of this issue is to provide a centralized SSO server for Hadoop, let's rename the JIRA and/or make it a subtask of HADOOP-9392 so as a result there are not two issues seemingly proposing the same high level goals. As you will note on HADOOP-9392, providing a common token based authentication framework to decouple internal user and service authentication from external mechanisms used to support it (like Kerberos) is already a part of the goal for HADOOP-9392. So, if this issue is only proposing a subset of that work, let's make that clear for contributors and the community.

          Larry McCay added a comment -

          I have just attached an overview of the client's interaction with HSSO. This document represents the cumulative thinking and discussion between myself <lmccay@hortonworks.com>, Kevin Minder <kevin.minder@hortonworks.com>, Dilli Arumugam <darumugam@hortonworks.com>, Kyle Leckie <kyleckie@microsoft.com> and Brian Swan <Brian.Swan@microsoft.com>.

          These discussions and collaborative work have been focused on use cases that include cloud, on-premises and hybrid enterprise deployments as well as 3rd party integration scenarios.

          It is intended to be a description of how the client's interaction with HSSO within the Hadoop cluster would look.

          It touches on some of the implementation details of this approach but leaves the majority of those details to be covered in future documents focused on specific aspects of this effort.

          We hope this to be a concise and understandable overview of the vision behind the HSSO effort. We also hope it to be a catalyst for discussions on how to rationalize related work and collaborate on the delivery of it.

          Please feel free to comment here and/or on the mailing lists with any questions, concerns or insight into your own perspectives.

          The next document will provide a similar overview into the use of HSSO for in-cluster trust relationships and service authentication.

          Larry McCay added a comment -

          pdf version.

          Brian Swan added a comment -

          Thanks for the heavy lifting on this, Larry. This issue is headed in the right direction. As you point out, it addresses the use cases of cloud, on-premises, hybrid enterprise deployments, and 3rd party integration scenarios - all of which are broadly important and are also our (Microsoft) high priority scenarios.

          To Daryn's comment about getting together at Hadoop Summit - I think that's a great idea. I plan to be there and look forward to discussions on how to rationalize related work and collaborate on the delivery of it.

          Kyle Leckie added a comment -

          Daryn
          Great point on negotiation. One concern I have with SASL or other forms of mechanism negotiation is whether the authenticity of the negotiated mechanism can be confirmed in order to avoid downgrade attacks. Would this assume TLS or some other mechanism?

          Daryn Sharp added a comment -

          As best I can tell, the general approach here is to replace the kerberos TGT with a conceptually equivalent IDP TGT? If so, let's call it an IDP (S)ervice (G)ranting (T)oken for a moment. The client requests an IDP-SGT and then presents this IDP-SGT to the NN/RM/etc, in place of a kerberos TGT, to acquire the standard tokens it does today?

          One concern I have with SASL or other forms of mechanism negotiation is whether the authenticity of the negotiated mechanism can be confirmed in order to avoid downgrade attacks. Would this assume TLS or some other mechanism?

          I don't believe there's any reason why SASL can't occur over SSL. At least with DIGEST-MD5, I don't believe SSL is strictly necessary since it's designed to avoid man-in-the-middle attacks. The server doesn't have direct access to the user supplied password. SASL does provide the ability to go into an encrypted mode (ex. SSL) post-authentication. Hadoop appears to support this but I'm not sure how well tested it is.

          What is the scenario you envision for downgrade attacks? A server requests LDAP so it can acquire the user's password? I suppose this is where an IDP-SGT would come in handy, but I'm unclear (maybe I missed it in the doc) how a PKI-based SGT addresses this issue. How is the SGT verified by a service, like the NN, such that the service can't capture and reuse the SGT?

          Larry McCay added a comment -

          Good questions.

          Your characterization is very close to what we have in mind. However, instead of the SGT being presented to the services by the client, it is only used to interact with the HSSO service in order to acquire a service specific access token. This token is then used to interact with the target service until it expires. I believe that HSSO's role here is much like the authentication server's role with kerberos. The client only presents the client_to_server_ticket (service ticket) to the actual service - right?
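          For illustration only, here is a minimal sketch of that flow; every interface and method name below is hypothetical and exists purely to show the shape of the interactions, not a proposed API:

          {code:java}
          // Hypothetical sketch of the client flow described above; none of these types exist today.
          public final class HssoClientFlowSketch {
            interface Credentials {}
            interface IdpSgt {}
            interface AccessToken { boolean isExpired(); }
            interface NameNodeClient { void connect(AccessToken token); }

            interface HssoService {
              IdpSgt authenticate(Credentials userCredentials);           // analogous to the kerberos AS exchange
              AccessToken getAccessToken(IdpSgt sgt, String serviceName); // analogous to acquiring a service ticket
            }

            static void access(HssoService hsso, Credentials creds, NameNodeClient nn) {
              // 1. The SGT is only ever presented to the HSSO service, never to the NN/RM.
              IdpSgt sgt = hsso.authenticate(creds);
              // 2. The SGT is exchanged for a service specific access token.
              AccessToken nnToken = hsso.getAccessToken(sgt, "namenode");
              // 3. The access token is used with the target service until it expires, then re-acquired.
              if (!nnToken.isExpired()) {
                nn.connect(nnToken);
              }
            }
          }
          {code}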

          I will let Kyle respond to the SASL and downgrade attack questions.

          Larry McCay added a comment -

          @Andrew - In preparation for our soon to be announced security session get together, I've spent some time trying to reconcile this HSSO Jira as a subtask of HADOOP-9392.
          While I'm not opposed to making it a subtask, I'm just not able to discern from the current design document posted for HADOOP-9392 exactly how it would fit as a subtask.
          Going into a design session with these two already aligned at the highest levels would probably be a good goal rather than try and get there during that meeting.
          If we could get a design document that is more focused on the space that HADOOP-9533 is addressing, then I think we could make great progress before the summit.
          It would also be helpful to be aware of what the other envisioned subtasks are for this effort. Without knowing how HSSO fits in as a subtask alone and along with others - I can't quite connect all the dots yet. Filing Jiras for your envisioned subtasks would probably be advantageous.
          FYI - we are also in the process of determining the highest level goals, threats and objectives for an authentication system that would have to replace kerberos as central to Hadoop. This will be communicated separately - so that we can collaborate on those up front. This set of canonical goals could then serve as the foundation for our converged design work at the summit and beyond.

          Kyle Leckie added a comment -

          Hi Daryn,
          Yes, SASL could occur over SSL. With SSL we get protection from eavesdropping, tampering and possibly server authentication. With that we can pass a bearer token over the network. Performing the SASL exchange would only slow down a request.

          In addition, the Java SASL mechanisms seem out of date (see "Moving DIGEST-MD5 to Historic", http://tools.ietf.org/html/rfc6331), and that document also describes the issues with downgrade attacks. If we are going to bet on a piece of code that needs to be kept updated, performant and promptly patched, I would bet on the TLS code.

          The SGT will only be handed to the HSSO and not the services such as the NN. The NN would get an NN specific token.

          Kyle

          Daryn Sharp added a comment -

          Initial thoughts are that passing a bearer token over the network creates a weakest link - it's dangerous if a rogue or untrustworthy service is involved. SASL can be leveraged to avoid passing the actual bearer token over the network, although that might complicate the desired design.

          The nice thing about the SASL abstraction is that mechanisms are easily swapped out w/o writing code. While I'm not opposed to SSL/TLS, relying solely on SSL (at least openssl) for authentication may not be a good idea since it seems to be rife with security flaws. I'd consider SSL an extra security layer, not the only security layer.

          Kevin Minder added a comment -

          I'm happy to announce that we have secured a time slot and dedicated space during Hadoop Summit NA for forward looking Hadoop security design collaboration. Currently, a room has been allocated on the 26th from 1:45 to 3:30 PT. The specific location will be available at the Summit and any changes in date or time will be announced publicly to the best of our abilities. In order to create a manageable agenda for this session, I'd like to schedule some prep meetings via meetup.com to start discussions and preparations with those that would be interested in co-organizing the session.

          Kevin Minder added a comment -

          Logistics for remote attendance will also be announced publicly when we have that figured out. We won't be making any decisions about security at either the prep or the Summit sessions, and detailed summaries will be provided here for those that cannot attend.

          Kyle Leckie added a comment -

          Hi Daryn,
          I have assumed that the issues with TLS will be:
          1) Key management
          2) Possible performance degradation

          With the intensive use of and reliance on the JDK's implementation of TLS, I don't expect any known unpatched issues. Older versions of the protocol have weaknesses, but we can enforce TLS 1.1+.
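          For reference, restricting the enabled protocol versions is straightforward with the stock JSSE APIs; a minimal sketch (the host and port below are placeholders):

          {code:java}
          import javax.net.ssl.SSLContext;
          import javax.net.ssl.SSLSocket;
          import javax.net.ssl.SSLSocketFactory;

          public class TlsVersionSketch {
            public static void main(String[] args) throws Exception {
              SSLContext context = SSLContext.getInstance("TLSv1.2");
              context.init(null, null, null); // default key managers and trust managers
              SSLSocketFactory factory = context.getSocketFactory();
              SSLSocket socket = (SSLSocket) factory.createSocket("hsso.example.com", 8443);
              try {
                // Refuse anything older than TLS 1.1 regardless of what the peer offers.
                socket.setEnabledProtocols(new String[] { "TLSv1.1", "TLSv1.2" });
                socket.startHandshake();
              } finally {
                socket.close();
              }
            }
          }
          {code}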

          Kyle

          Kevin Minder added a comment -

          Although meetup.com was recommended to me as a mechanism to schedule a discussion, that doesn't really seem like it will work since this needs to be virtual. I've scheduled a Google Hangout for 12pm PT on Wednesday 6/12. https://plus.google.com/hangouts/_/calendar/a2V2aW4ubWluZGVyQGhvcnRvbndvcmtzLmNvbQ.qa0og2a0gaag9djeviv2rai63c
          I'm happy to move this around based on availability of those interested. I'm just not sure of the timezones involved. You can email my apache account (kminder at apache) or my jira profile address if you don't want that info here.
          At any rate, for this "pre-meeting" I'd like to discuss what everyone would like to get out of our time at the Summit and how we can prepare in advance. To seed this, I think there are a few things we need to nail down before we get there.
          1) The scope of the discussion
          2) The basic goals/requirements from various perspectives
          3) Agreement on the design discussion logistics (we only have two hours)
          At Summit we can:
          1) Discuss design approaches. I want to stress that these discussions need to be at a fairly high level given the time allocation. Ideally we would have been able to cover this already here but we are rapidly running out of time.
          2) Discuss a general implementation approach for any change of this nature
          3) Discuss rollout expectations (e.g. Hadoop ?.?)

          Kevin Minder added a comment -

          I also added this gho for the meeting today here http://gphangouts.com/google/hangout/general/109294359812907561436/

          Larry McCay added a comment -

          A thank you to those that attended the prep-call yesterday for the summit security session. While not all interested parties were able to make it to this call, we were able to lay some groundwork for moving forward. We intend to schedule another call for next week at a more globally appropriate time. In the meantime, the following is a summary of yesterday's call and should be used to frame the agenda for the next call.

          Prep-call Summary

          Introductions

          Community driven collaboration examples

          • HDFS-HA as a successful model
          • break out concrete areas that can be worked on by different parties but are aligned and complementary
          • HDFS-HA apparently did this between at least two contributing parties with functionality separated into things like:
            a. client failover/recovery
            b. transaction journalling to support the recovery

          Roadmap to prepare for summit:

          • Describe overall end-state goals for the Hadoop Security Model for Authentication (keep the scope focused on authn)
          • Canonical security concerns and threats for an authentication system that is an alternative to kerberos
            • add as document or subtask of https://issues.apache.org/jira/browse/HADOOP-9621
          • Describe the various tasks/projects that are required for reaching our goals
          • reconcile existing Jiras as subtasks of others as appropriate

          Ideally at summit we will be able to focus on:

          • Identify a phased approach to reaching our goals
          • Identify the best form of collaboration model for the effort
          • Identify natural seams of separation for collaboration
          • Interested contributors commit to specific aspects of the effort
          eric baldeschwieler added a comment -

          As always... This is a proposal for discussion and refinement. Decisions
          are made via discussions such as this Jira.


          E14 - via thumbs on glass

          Larry McCay added a comment -

          Absolutely, Eric. I should have been clearer about that. That summary is what was discussed and suggested to be presented to the community as a way forward. We can and should discuss further to establish the agenda for the next call. Perhaps we'll use the dev-common list for proposing the next agenda?

          Kevin Minder added a comment -

          I'd like to provide another opportunity for anyone interested to discuss and prepare for the DesignLounge @ HadoopSummit session on security. I'll have a WebEx running today at 5pmPT/8pmET/8amCT. As before this will just be a discussion (no decisions) and we will summarize here following the meeting. Here is the proposed high level agenda.

          • Introductions
          • Summarize previous call
          • Discuss goals/agenda/logistics for security DesignLounge@HadoopSummit session
          • Plan required preparatory material for the session

          WebEx details
          -------------------------------------------------------
          Meeting information
          -------------------------------------------------------
          Topic: Hadoop Security
          Date: Wednesday, June 19, 2013
          Time: 5:00 pm, Pacific Daylight Time (San Francisco, GMT-07:00)
          Meeting Number: 625 489 526
          Meeting Password: HadoopSecurity
          -------------------------------------------------------
          To start or join the online meeting
          -------------------------------------------------------
          Go to https://hortonworks.webex.com/hortonworks/j.php?ED=256673687&UID=508554752&PW=NZDdjOTcyNzdi&RT=MiM0
          -------------------------------------------------------
          Audio conference information
          -------------------------------------------------------
          To receive a call back, provide your phone number when you join the meeting, or call the number below and enter the access code.
          Call-in toll-free number (US/Canada): 1-877-668-4493
          Call-in toll number (US/Canada): 1-650-479-3208
          Global call-in numbers: https://hortonworks.webex.com/hortonworks/globalcallin.php?serviceType=MC&ED=256673687&tollFree=1
          Toll-free dialing restrictions: http://www.webex.com/pdf/tollfree_restrictions.pdf
          Access code:625 489 526
          -------------------------------------------------------
          For assistance
          -------------------------------------------------------
          1. Go to https://hortonworks.webex.com/hortonworks/mc
          2. On the left navigation bar, click "Support".
          To add this meeting to your calendar program (for example Microsoft Outlook), click this link:
          https://hortonworks.webex.com/hortonworks/j.php?ED=256673687&UID=508554752&ICS=MS&LD=1&RD=2&ST=1&SHA2=AAAAAtYvvV8MU/6na1FmVxgxSUcpUBRMQ62CB-UdrJ15Wywo
          To check whether you have the appropriate players installed for UCF (Universal Communications Format) rich media files, go to https://hortonworks.webex.com/hortonworks/systemdiagnosis.php.
          http://www.webex.com
          CCM:+16504793208x625489526#
          IMPORTANT NOTICE: This WebEx service includes a feature that allows audio and any documents and other materials exchanged or viewed during the session to be recorded. You should inform all meeting attendees prior to recording if you intend to record the meeting. Please note that any such recordings may be subject to discovery in the event of litigation.

          Kevin Minder added a comment -

          Relevant security related docs attached to this HADOOP-9621

          Kevin Minder added a comment -

          This is a summary of the discussion that occurred during the above meeting.

          – Attendees –
          Andrew Purtell, Brian Swan, Benoy Antony, Avik Dey, Kai Zheng, Kyle Leckie, Larry McCay, Kevin Minder, Tianyou Li

          – Goals & Perspective –

          Hortonworks

          • Plug into any enterprise Idp infrastructure
          • Enhance Hadoop security model to better support perimeter security
          • Align client programming model for different Hadoop deployment models

          Microsoft

          • Support pluggable identity providers: ActiveDirectory, cloud and beyond
          • Enhance user isolation within Hadoop cluster

          Intel

          • Support token based authentication
          • Support fine grained authorization
          • Seamless identity delegation at every layer
          • Support single sign on: from user's desktop, between Hadoop clusters
          • Pluggable at every level
          • Provide a security "toolkit" that would be integrated across the ecosystem
          • Must be backward compatible
          • Must take both RPC and HTTP into account and should follow common model

          eBay

          • Integrate better with eBay SSO
          • Provide SSO integration at RPC layer

          – Summit Planning –

          • Think of Summit session as a "meet and greet" and "Kickoff" of cross cutting security community
          • Create a new Jira to collect high-level use cases, goals and usability
          • Use time at summit to approach design at a whiteboard from a "clean slate" perspective against those use cases and goals
          • Get a sense of how we can divide and conquer the problem space
          • Figure out how best to collaborate
          • Figure out how we can all get "hacking" on this ASAP

          – Ideas –

          • Foster a security community within the Hadoop community
          • Suggest creating a focused security-dev type community mailing list
          • Suggest creating a wiki area devoted to overall security efforts
          • Ideally, current independent designs will inform a collaborative design, pulling in the best of existing code to accelerate
          • Link the security doc Jira HADOOP-9621 to other related security Jiras

          – Questions –

          • What would a central token authority (i.e. HSSO) provide beyond the work that is already being done?
            HADOOP-9479 (Benoy Antony)
            HADOOP-8779 (Daryn Sharp)
          • How can HSSO and TAS work together? What is the relationship?
          Larry McCay added a comment -
          • Summit Summary -

          Last week at Hadoop Summit there was a room dedicated as the summit Design Lounge.
          This was a place where folks could get together and talk about design issues with other contributors with a simple flip-board and some beanbag chairs.
          We used this as an opportunity to bootstrap some discussions within common-dev for security related topics. I'd like to summarize the security session and takeaways here for everyone.

          This summary and set of takeaways are largely from memory.
          Please feel free to correct anything that is inaccurate or omitted.

          Pretty well attended - don't recall all the names but some of the companies represented:

          • Yahoo!
          • Microsoft
          • Hortonworks
          • Intel
          • eBay
          • Voltage Security
          • Flying Penguins
          • EMC
          • others...

          We set expectations as a meet and greet/project kickoff - project being the emerging security development community.
          Most folks were pretty engaged throughout the session.

          In order to keep the scope of conversations manageable we tried to remain focused on authentication and the ideas around SSO and tokens.

          We discussed kerberos as:
          1. major pain point and barrier to entry for some
          2. seemingly perfect for others
          a. obviously requiring backward compatibility

          It seemed to be consensus that:
          1. user authentication should be easily integrated with alternative enterprise identity solutions
          2. that service identity issues should not require thousands of service identities added to enterprise user repositories
          3. that customers should not be forced to install/deploy and manage a KDC for services - this implies a couple options:
          a. alternatives to kerberos for service identities
          b. hadoop KDC implementation - ie. ApacheDS?

          There was active discussion around:
          1. Hadoop SSO server
          a. acknowledgement of Hadoop SSO tokens as something that can be standardized for representing both the identity and authentication event data as well as access tokens representing a verifiable means for the authenticated identity to access resources or services
          b. a general understanding of Hadoop SSO as being an analogue and alternative for the kerberos KDC and the related tokens being analogous to TGTs and service tickets
          c. an agreement that there are interesting attributes about the authentication event that may be useful in cross cluster trust for SSO - such as a rating of authentication strength and number of factors, etc
          d. that existing Hadoop tokens - ie. delegation, job, block access - will all continue to work and that we are initially looking at alternatives to the KDC, TGTs and service tickets
          2. authentication mechanism discovery by clients - Daryn Sharp has done a bunch of work around this and our SSO solution may want to consider a similar mechanism for discovering trusted IDPs and service endpoints
          3. backward compatibility - Kerberos shops need to just continue to work
          4. some insight into where/how folks believe that token-based authentication can be accomplished within existing contracts - SASL/GSSAPI, REST, web UI
          5. the establishment of a cross-cutting security community and what that means in terms of the Apache way - email lists, wiki, Jiras across projects, etc.
          6. dependencies, rolling updates, patching and how they relate to Hadoop projects versus packaging
          7. collaboration road ahead
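
          To make the token points in 1.a-1.d above a bit more concrete, here is a minimal, purely illustrative sketch of a PKI-signed SSO token. The claim names (sub, iss, exp, authnMethod, authnFactors), the RSA/SHA-256 signature, and the dot-separated encoding are all assumptions made up for this example - this is not a proposed wire format, an agreed design, or an existing Hadoop API.

          import java.nio.charset.StandardCharsets;
          import java.security.KeyPair;
          import java.security.KeyPairGenerator;
          import java.security.Signature;
          import java.util.Base64;

          // Hypothetical sketch only: claim names and encoding are illustrative.
          public class SsoTokenSketch {
              public static void main(String[] args) throws Exception {
                  // Token authority key pair; in practice the private key would stay with
                  // the authority and only the public key would be published to verifiers.
                  KeyPair authority = KeyPairGenerator.getInstance("RSA").generateKeyPair();

                  // Illustrative claims: identity plus authentication-event attributes
                  // such as the mechanism used and the number of factors.
                  String claims = "{\"sub\":\"hdfs/host1@EXAMPLE\",\"iss\":\"sso\",\"exp\":1735689600,"
                          + "\"authnMethod\":\"ldap\",\"authnFactors\":2}";

                  // The authority signs the claims with its private key.
                  Signature signer = Signature.getInstance("SHA256withRSA");
                  signer.initSign(authority.getPrivate());
                  signer.update(claims.getBytes(StandardCharsets.UTF_8));
                  String token = Base64.getEncoder().encodeToString(claims.getBytes(StandardCharsets.UTF_8))
                          + "." + Base64.getEncoder().encodeToString(signer.sign());

                  // Any service holding the published public key can verify the token
                  // offline, without any shared secret.
                  String[] parts = token.split("\\.");
                  Signature verifier = Signature.getInstance("SHA256withRSA");
                  verifier.initVerify(authority.getPublic());
                  verifier.update(Base64.getDecoder().decode(parts[0]));
                  System.out.println("token verifies: " + verifier.verify(Base64.getDecoder().decode(parts[1])));
              }
          }

          The only point of the sketch is that verification requires nothing beyond the authority's published public key; the actual shape and content of the tokens is exactly what remains to be discussed (see takeaway 3.c below).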

          A number of breakout discussions were had outside of the designated design lounge session as well.

          Takeaways for the immediate road ahead:
          1. common-dev may be sufficient to discuss security related topics
          a. many developers are already subscribed to it
          b. there is not that much traffic there anyway
          c. we can discuss a more security focused list if we like
          2. we will discuss the establishment of a wiki space for a holistic view of security model, patterns, approaches, etc
          3. we will begin discussion on common-dev in near-term for the following:
          a. discuss and agree on the high level moving parts required for our goals for authentication: SSO service, tokens, token validation handlers, credential management tools, etc
          b. discuss and agree on the natural seams across these moving parts and agree on collaboration by tackling various pieces in a divide and conquer approach
          c. more than likely - the first piece that will need some immediate discussion will be the shape and form of the tokens
          d. we will follow up or supplement discussions with POC code patches and/or specs attached to jiras

          Overall, design lounge was rather effective for what we wanted to do - which was to bootstrap discussions and collaboration within the community at large. As always, no specific decisions have been made during this session and we can discuss any or all of this within common-dev and on related jiras.

          Jiras related to the security development group and these discussions:

          Centralized SSO/Token Server https://issues.apache.org/jira/browse/HADOOP-9533
          Token based authentication and SSO https://issues.apache.org/jira/browse/HADOOP-9392
          Document/analyze current Hadoop security model https://issues.apache.org/jira/browse/HADOOP-9621
          Improve Hadoop security - Use cases https://issues.apache.org/jira/browse/HADOOP-9671

          Larry McCay added a comment -

          Just realized that I failed to mention that Cloudera was also represented - sorry Aaron!


            People

            • Assignee: Larry McCay
            • Reporter: Larry McCay
            • Votes: 0
            • Watchers: 53


                Time Tracking

                • Original Estimate: 1,176h
                • Remaining Estimate: 1,176h
                • Time Spent: Not Specified
