Hadoop HDFS / HDFS-15528

Not able to list encryption zones with federation


Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 3.0.0
    • Fix Version/s: None
    • Component/s: encryption, federation
    • Labels: None

    Description

       hdfs crypto -listZones
      IllegalArgumentException: 'viewfs://cluster14' is not an HDFS URI.
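The failure mode appears to be that under federation the client's default filesystem is the viewfs:// mount table, and the HDFS crypto admin client rejects any URI whose scheme is not hdfs before making an RPC. A minimal Python sketch of that scheme check, for illustration only (the real validation lives in the Java client; `check_hdfs_uri` is a hypothetical name, not a Hadoop API):

```python
from urllib.parse import urlparse

def check_hdfs_uri(uri: str) -> None:
    # Hypothetical mirror of the client-side validation: any scheme
    # other than hdfs:// is rejected up front, before any RPC is made.
    scheme = urlparse(uri).scheme
    if scheme != "hdfs":
        raise ValueError(f"'{uri}' is not an HDFS URI.")

# One of the underlying nameservices passes the check...
check_hdfs_uri("hdfs://pii")

# ...but the federated viewfs default filesystem does not, which is
# exactly the error -listZones reports.
try:
    check_hdfs_uri("viewfs://cluster14")
except ValueError as e:
    print(e)  # → 'viewfs://cluster14' is not an HDFS URI.
```

This suggests the command would need to be pointed at a concrete hdfs:// nameservice rather than the viewfs mount to list zones per-namespace.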

       

      ------

      Debug log:

      20/08/12 05:53:14 DEBUG util.Shell: setsid exited with exit code 0
      20/08/12 05:53:14 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName=Ops, about=, type=DEFAULT, valueName=Time, value=[Rate of successful kerberos logins and latency (milliseconds)])
      20/08/12 05:53:14 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName=Ops, about=, type=DEFAULT, valueName=Time, value=[Rate of failed kerberos logins and latency (milliseconds)])
      20/08/12 05:53:14 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName=Ops, about=, type=DEFAULT, valueName=Time, value=[GetGroups])
      20/08/12 05:53:14 DEBUG lib.MutableMetricsFactory: field private org.apache.hadoop.metrics2.lib.MutableGaugeLong org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailuresTotal with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName=Ops, about=, type=DEFAULT, valueName=Time, value=[Renewal failures since startup])
      20/08/12 05:53:14 DEBUG lib.MutableMetricsFactory: field private org.apache.hadoop.metrics2.lib.MutableGaugeInt org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailures with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName=Ops, about=, type=DEFAULT, valueName=Time, value=[Renewal failures since last successful login])
      20/08/12 05:53:14 DEBUG impl.MetricsSystemImpl: UgiMetrics, User and group related metrics
      20/08/12 05:53:14 DEBUG security.SecurityUtil: Setting hadoop.security.token.service.use_ip to true
      20/08/12 05:53:14 DEBUG security.Groups: Creating new Groups object
      20/08/12 05:53:14 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000; warningDeltaMs=5000
      20/08/12 05:53:14 DEBUG security.UserGroupInformation: hadoop login
      20/08/12 05:53:14 DEBUG security.UserGroupInformation: hadoop login commit
      20/08/12 05:53:14 DEBUG security.UserGroupInformation: using kerberos user:hdfs@CORP.EPSILON.COM
      20/08/12 05:53:14 DEBUG security.UserGroupInformation: Using user: "hdfs@CORP.EPSILON.COM" with name hdfs@CORP.EPSILON.COM
      20/08/12 05:53:14 DEBUG security.UserGroupInformation: User entry: "hdfs@CORP.EPSILON.COM"
      20/08/12 05:53:14 DEBUG security.UserGroupInformation: UGI loginUser:hdfs@CORP.EPSILON.COM (auth:KERBEROS)
      20/08/12 05:53:14 DEBUG security.UserGroupInformation: Current time is 1597233194735
      20/08/12 05:53:14 DEBUG security.UserGroupInformation: Next refresh is 1597261977000
      20/08/12 05:53:14 DEBUG core.Tracer: sampler.classes = ; loaded no samplers
      20/08/12 05:53:14 DEBUG core.Tracer: span.receiver.classes = ; loaded no span receivers
      20/08/12 05:53:14 DEBUG fs.FileSystem: Loading filesystems
      20/08/12 05:53:14 DEBUG fs.FileSystem: s3n:// = class org.apache.hadoop.fs.s3native.NativeS3FileSystem from /opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/jars/hadoop-aws-3.0.0-cdh6.2.1.jar
      20/08/12 05:53:14 DEBUG fs.FileSystem: gs:// = class com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem from /opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/lib/hadoop/gcs-connector-hadoop3-1.9.10-cdh6.2.1-shaded.jar
      20/08/12 05:53:14 DEBUG fs.FileSystem: file:// = class org.apache.hadoop.fs.LocalFileSystem from /opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/jars/hadoop-common-3.0.0-cdh6.2.1.jar
      20/08/12 05:53:14 DEBUG fs.FileSystem: viewfs:// = class org.apache.hadoop.fs.viewfs.ViewFileSystem from /opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/jars/hadoop-common-3.0.0-cdh6.2.1.jar
      20/08/12 05:53:14 DEBUG fs.FileSystem: ftp:// = class org.apache.hadoop.fs.ftp.FTPFileSystem from /opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/jars/hadoop-common-3.0.0-cdh6.2.1.jar
      20/08/12 05:53:14 DEBUG fs.FileSystem: har:// = class org.apache.hadoop.fs.HarFileSystem from /opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/jars/hadoop-common-3.0.0-cdh6.2.1.jar
      20/08/12 05:53:14 DEBUG fs.FileSystem: http:// = class org.apache.hadoop.fs.http.HttpFileSystem from /opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/jars/hadoop-common-3.0.0-cdh6.2.1.jar
      20/08/12 05:53:14 DEBUG fs.FileSystem: https:// = class org.apache.hadoop.fs.http.HttpsFileSystem from /opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/jars/hadoop-common-3.0.0-cdh6.2.1.jar
      20/08/12 05:53:14 DEBUG fs.FileSystem: hdfs:// = class org.apache.hadoop.hdfs.DistributedFileSystem from /opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/jars/hadoop-hdfs-client-3.0.0-cdh6.2.1.jar
      20/08/12 05:53:14 DEBUG fs.FileSystem: webhdfs:// = class org.apache.hadoop.hdfs.web.WebHdfsFileSystem from /opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/jars/hadoop-hdfs-client-3.0.0-cdh6.2.1.jar
      20/08/12 05:53:14 DEBUG fs.FileSystem: swebhdfs:// = class org.apache.hadoop.hdfs.web.SWebHdfsFileSystem from /opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/jars/hadoop-hdfs-client-3.0.0-cdh6.2.1.jar
      20/08/12 05:53:14 DEBUG fs.FileSystem: Looking for FS supporting viewfs
      20/08/12 05:53:14 DEBUG fs.FileSystem: looking for configuration option fs.viewfs.impl
      20/08/12 05:53:14 DEBUG fs.FileSystem: Looking in service filesystems for implementation class
      20/08/12 05:53:14 DEBUG fs.FileSystem: FS for viewfs is class org.apache.hadoop.fs.viewfs.ViewFileSystem
      20/08/12 05:53:14 DEBUG fs.FileSystem: Looking for FS supporting hdfs
      20/08/12 05:53:14 DEBUG fs.FileSystem: looking for configuration option fs.hdfs.impl
      20/08/12 05:53:14 DEBUG fs.FileSystem: Looking in service filesystems for implementation class
      20/08/12 05:53:14 DEBUG fs.FileSystem: FS for hdfs is class org.apache.hadoop.hdfs.DistributedFileSystem
      20/08/12 05:53:14 DEBUG impl.DfsClientConf: dfs.client.use.legacy.blockreader.local = false
      20/08/12 05:53:14 DEBUG impl.DfsClientConf: dfs.client.read.shortcircuit = false
      20/08/12 05:53:14 DEBUG impl.DfsClientConf: dfs.client.domain.socket.data.traffic = false
      20/08/12 05:53:14 DEBUG impl.DfsClientConf: dfs.domain.socket.path = /var/run/hdfs-sockets/dn
      20/08/12 05:53:14 DEBUG hdfs.DFSClient: Sets dfs.client.block.write.replace-datanode-on-failure.min-replication to 0
      20/08/12 05:53:14 DEBUG hdfs.HAUtilClient: No HA service delegation token found for logical URI hdfs://pii/user
      20/08/12 05:53:14 DEBUG impl.DfsClientConf: dfs.client.use.legacy.blockreader.local = false
      20/08/12 05:53:14 DEBUG impl.DfsClientConf: dfs.client.read.shortcircuit = false
      20/08/12 05:53:14 DEBUG impl.DfsClientConf: dfs.client.domain.socket.data.traffic = false
      20/08/12 05:53:14 DEBUG impl.DfsClientConf: dfs.domain.socket.path = /var/run/hdfs-sockets/dn
      20/08/12 05:53:14 DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
      20/08/12 05:53:14 DEBUG ipc.Server: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcProtobufRequest, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@291b4bf5
      20/08/12 05:53:14 DEBUG ipc.Client: ipc.client.bind.wildcard.addr set to true. Will bind client sockets to wildcard address.
      20/08/12 05:53:14 DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@1ebd319f
      20/08/12 05:53:15 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
      20/08/12 05:53:15 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
      20/08/12 05:53:15 DEBUG unix.DomainSocketWatcher: org.apache.hadoop.net.unix.DomainSocketWatcher$2@3310b42: starting with interruptCheckPeriodMs = 60000
      20/08/12 05:53:15 DEBUG util.PerformanceAdvisory: Both short-circuit local reads and UNIX domain socket are disabled.
      20/08/12 05:53:15 DEBUG sasl.DataTransferSaslUtil: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
      20/08/12 05:53:15 DEBUG fs.FileSystem: Looking for FS supporting hdfs
      20/08/12 05:53:15 DEBUG fs.FileSystem: looking for configuration option fs.hdfs.impl
      20/08/12 05:53:15 DEBUG fs.FileSystem: Looking in service filesystems for implementation class
      20/08/12 05:53:15 DEBUG fs.FileSystem: FS for hdfs is class org.apache.hadoop.hdfs.DistributedFileSystem
      20/08/12 05:53:15 DEBUG impl.DfsClientConf: dfs.client.use.legacy.blockreader.local = false
      20/08/12 05:53:15 DEBUG impl.DfsClientConf: dfs.client.read.shortcircuit = false
      20/08/12 05:53:15 DEBUG impl.DfsClientConf: dfs.client.domain.socket.data.traffic = false
      20/08/12 05:53:15 DEBUG impl.DfsClientConf: dfs.domain.socket.path = /var/run/hdfs-sockets/dn
      20/08/12 05:53:15 DEBUG hdfs.DFSClient: Sets dfs.client.block.write.replace-datanode-on-failure.min-replication to 0
      20/08/12 05:53:15 DEBUG hdfs.HAUtilClient: No HA service delegation token found for logical URI hdfs://attribute/attribute
      20/08/12 05:53:15 DEBUG impl.DfsClientConf: dfs.client.use.legacy.blockreader.local = false
      20/08/12 05:53:15 DEBUG impl.DfsClientConf: dfs.client.read.shortcircuit = false
      20/08/12 05:53:15 DEBUG impl.DfsClientConf: dfs.client.domain.socket.data.traffic = false
      20/08/12 05:53:15 DEBUG impl.DfsClientConf: dfs.domain.socket.path = /var/run/hdfs-sockets/dn
      20/08/12 05:53:15 DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
      20/08/12 05:53:15 DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@1ebd319f
      20/08/12 05:53:15 DEBUG sasl.DataTransferSaslUtil: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
      20/08/12 05:53:15 DEBUG fs.FileSystem: Looking for FS supporting hdfs
      20/08/12 05:53:15 DEBUG fs.FileSystem: looking for configuration option fs.hdfs.impl
      20/08/12 05:53:15 DEBUG fs.FileSystem: Looking in service filesystems for implementation class
      20/08/12 05:53:15 DEBUG fs.FileSystem: FS for hdfs is class org.apache.hadoop.hdfs.DistributedFileSystem
      20/08/12 05:53:15 DEBUG impl.DfsClientConf: dfs.client.use.legacy.blockreader.local = false
      20/08/12 05:53:15 DEBUG impl.DfsClientConf: dfs.client.read.shortcircuit = false
      20/08/12 05:53:15 DEBUG impl.DfsClientConf: dfs.client.domain.socket.data.traffic = false
      20/08/12 05:53:15 DEBUG impl.DfsClientConf: dfs.domain.socket.path = /var/run/hdfs-sockets/dn
      20/08/12 05:53:15 DEBUG hdfs.DFSClient: Sets dfs.client.block.write.replace-datanode-on-failure.min-replication to 0
      20/08/12 05:53:15 DEBUG hdfs.HAUtilClient: No HA service delegation token found for logical URI hdfs://mapping/
      20/08/12 05:53:15 DEBUG impl.DfsClientConf: dfs.client.use.legacy.blockreader.local = false
      20/08/12 05:53:15 DEBUG impl.DfsClientConf: dfs.client.read.shortcircuit = false
      20/08/12 05:53:15 DEBUG impl.DfsClientConf: dfs.client.domain.socket.data.traffic = false
      20/08/12 05:53:15 DEBUG impl.DfsClientConf: dfs.domain.socket.path = /var/run/hdfs-sockets/dn
      20/08/12 05:53:15 DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
      20/08/12 05:53:15 DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@1ebd319f
      20/08/12 05:53:15 DEBUG sasl.DataTransferSaslUtil: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
      IllegalArgumentException: 'viewfs://cluster14' is not an HDFS URI.
      20/08/12 05:53:15 DEBUG ipc.Client: stopping client from cache: org.apache.hadoop.ipc.Client@1ebd319f
      20/08/12 05:53:15 DEBUG ipc.Client: stopping client from cache: org.apache.hadoop.ipc.Client@1ebd319f
      20/08/12 05:53:15 DEBUG ipc.Client: stopping client from cache: org.apache.hadoop.ipc.Client@1ebd319f
      20/08/12 05:53:15 DEBUG ipc.Client: removing client from cache: org.apache.hadoop.ipc.Client@1ebd319f
      20/08/12 05:53:15 DEBUG ipc.Client: stopping actual client because no more references remain: org.apache.hadoop.ipc.Client@1ebd319f
      20/08/12 05:53:15 DEBUG ipc.Client: Stopping client
      20/08/12 05:53:15 DEBUG util.ShutdownHookManager: Completed shutdown in 0.003 seconds; Timeouts: 0
      20/08/12 05:53:15 DEBUG util.ShutdownHookManager: ShutdownHookManger completed shutdown.

       

      People

        Assignee: Unassigned
        Reporter: Thangamani Murugasamy (thangamani.murugasamy@epsilon.com)
        Votes: 0
        Watchers: 3
